author     Heather Miller <heather.miller@epfl.ch>    2016-12-18 15:59:07 -0500
committer  GitHub <noreply@github.com>                2016-12-18 15:59:07 -0500
commit     b8ea0136877b83801707f85ca902ce033fffe360 (patch)
tree       0405d53412ce19830d1550f08d4879148d79201f /chapter
parent     313f5c7bdd02346683b376f54301792e9f2e48f5 (diff)
parent     5b299d189255e04ca3481a15e15b2a2e10b29b42 (diff)
Merge branch 'master' into Jingjing-Abhilash-bigdata-v2
Diffstat (limited to 'chapter')
-rw-r--r-- chapter/1/figures/grpc-benchmark.png | bin 0 -> 17014 bytes
-rw-r--r-- chapter/1/figures/grpc-client-transport-handler.png | bin 0 -> 67834 bytes
-rw-r--r-- chapter/1/figures/grpc-cross-language.png | bin 0 -> 27394 bytes
-rw-r--r-- chapter/1/figures/grpc-googleapis.png | bin 0 -> 33354 bytes
-rw-r--r-- chapter/1/figures/grpc-languages.png | bin 0 -> 47003 bytes
-rw-r--r-- chapter/1/figures/grpc-server-transport-handler.png | bin 0 -> 60913 bytes
-rw-r--r-- chapter/1/figures/hello-world-client.png | bin 0 -> 30161 bytes
-rw-r--r-- chapter/1/figures/hello-world-server.png | bin 0 -> 13005 bytes
-rw-r--r-- chapter/1/figures/http2-frame.png | bin 0 -> 12057 bytes
-rw-r--r-- chapter/1/figures/http2-stream-lifecycle.png | bin 0 -> 49038 bytes
-rw-r--r-- chapter/1/figures/protobuf-types.png | bin 0 -> 19941 bytes
-rw-r--r-- chapter/1/gRPC.md | 323
-rw-r--r-- chapter/1/rpc.md | 378
-rw-r--r-- chapter/2/futures.md | 602
-rw-r--r-- chapter/2/images/1.png | bin 0 -> 41235 bytes
-rw-r--r-- chapter/2/images/15.png | bin 0 -> 48459 bytes
-rw-r--r-- chapter/2/images/5.png | bin 0 -> 20821 bytes
-rw-r--r-- chapter/2/images/6.png | bin 0 -> 19123 bytes
-rw-r--r-- chapter/2/images/7.png | bin 0 -> 30068 bytes
-rw-r--r-- chapter/2/images/8.png | bin 0 -> 13899 bytes
-rw-r--r-- chapter/2/images/9.png | bin 0 -> 6463 bytes
-rw-r--r-- chapter/2/images/p-1.png | bin 0 -> 39600 bytes
-rw-r--r-- chapter/2/images/p-1.svg | 4
-rw-r--r-- chapter/2/images/p-2.png | bin 0 -> 40084 bytes
-rw-r--r-- chapter/2/images/p-2.svg | 4
-rw-r--r-- chapter/3/E_account_spreadsheet_vats.png | bin 0 -> 183811 bytes
-rw-r--r-- chapter/3/E_vat.png | bin 0 -> 53914 bytes
-rw-r--r-- chapter/3/message-passing.md | 462
-rw-r--r-- chapter/3/sentinel_nodes.png | bin 0 -> 157837 bytes
-rw-r--r-- chapter/3/supervision_tree.png | bin 0 -> 143187 bytes
-rw-r--r-- chapter/6/acidic-to-basic-how-the-database-ph-has-changed.md | 182
-rw-r--r-- chapter/6/being-consistent.md | 82
-rw-r--r-- chapter/6/consistency-crdts.md | 11
-rw-r--r-- chapter/6/resources/partitioned-network.jpg | bin 0 -> 24303 bytes
-rw-r--r-- chapter/7/langs-consistency.md | 625
-rw-r--r-- chapter/8/big-data.md | 2
36 files changed, 2646 insertions, 29 deletions
diff --git a/chapter/1/figures/grpc-benchmark.png b/chapter/1/figures/grpc-benchmark.png
new file mode 100644
index 0000000..9f39c71
--- /dev/null
+++ b/chapter/1/figures/grpc-benchmark.png
Binary files differ
diff --git a/chapter/1/figures/grpc-client-transport-handler.png b/chapter/1/figures/grpc-client-transport-handler.png
new file mode 100644
index 0000000..edd5236
--- /dev/null
+++ b/chapter/1/figures/grpc-client-transport-handler.png
Binary files differ
diff --git a/chapter/1/figures/grpc-cross-language.png b/chapter/1/figures/grpc-cross-language.png
new file mode 100644
index 0000000..c600f67
--- /dev/null
+++ b/chapter/1/figures/grpc-cross-language.png
Binary files differ
diff --git a/chapter/1/figures/grpc-googleapis.png b/chapter/1/figures/grpc-googleapis.png
new file mode 100644
index 0000000..62718e5
--- /dev/null
+++ b/chapter/1/figures/grpc-googleapis.png
Binary files differ
diff --git a/chapter/1/figures/grpc-languages.png b/chapter/1/figures/grpc-languages.png
new file mode 100644
index 0000000..1f1c50d
--- /dev/null
+++ b/chapter/1/figures/grpc-languages.png
Binary files differ
diff --git a/chapter/1/figures/grpc-server-transport-handler.png b/chapter/1/figures/grpc-server-transport-handler.png
new file mode 100644
index 0000000..fe895c0
--- /dev/null
+++ b/chapter/1/figures/grpc-server-transport-handler.png
Binary files differ
diff --git a/chapter/1/figures/hello-world-client.png b/chapter/1/figures/hello-world-client.png
new file mode 100644
index 0000000..c4cf7d4
--- /dev/null
+++ b/chapter/1/figures/hello-world-client.png
Binary files differ
diff --git a/chapter/1/figures/hello-world-server.png b/chapter/1/figures/hello-world-server.png
new file mode 100644
index 0000000..a51554b
--- /dev/null
+++ b/chapter/1/figures/hello-world-server.png
Binary files differ
diff --git a/chapter/1/figures/http2-frame.png b/chapter/1/figures/http2-frame.png
new file mode 100644
index 0000000..59d6ed5
--- /dev/null
+++ b/chapter/1/figures/http2-frame.png
Binary files differ
diff --git a/chapter/1/figures/http2-stream-lifecycle.png b/chapter/1/figures/http2-stream-lifecycle.png
new file mode 100644
index 0000000..87333cb
--- /dev/null
+++ b/chapter/1/figures/http2-stream-lifecycle.png
Binary files differ
diff --git a/chapter/1/figures/protobuf-types.png b/chapter/1/figures/protobuf-types.png
new file mode 100644
index 0000000..aaf3a1e
--- /dev/null
+++ b/chapter/1/figures/protobuf-types.png
Binary files differ
diff --git a/chapter/1/gRPC.md b/chapter/1/gRPC.md
new file mode 100644
index 0000000..f6c47b7
--- /dev/null
+++ b/chapter/1/gRPC.md
@@ -0,0 +1,323 @@
+---
+layout: page
+title: "gRPC"
+by: "Paul Grosu (Northeastern U.), Muzammil Abdul Rehman (Northeastern U.), Eric Anderson (Google, Inc.), Vijay Pai (Google, Inc.), and Heather Miller (Northeastern U.)"
+---
+
+<h1>
+<p align="center">gRPC</p>
+</h1>
+
+<h4><em>
+<p align="center">Paul Grosu (Northeastern U.), Muzammil Abdul Rehman (Northeastern U.), Eric Anderson (Google, Inc.), Vijay Pai (Google, Inc.), and Heather Miller (Northeastern U.)</p>
+</em></h4>
+
+<hr>
+
+<h3><em><p align="center">Abstract</p></em></h3>
+
+<em>gRPC was built through a collaboration between Google and Square as a public replacement for Stubby, ARCWire and Sake {% cite Apigee %}. The gRPC framework is a form of Actor Model based on an IDL (Interface Description Language), defined via the Protocol Buffer message format. With the introduction of HTTP/2, the capabilities of the internal Google Stubby and Square Sake frameworks have now been made available to the public. By working on top of the HTTP/2 protocol, gRPC enables messages to be multiplexed and compressed bi-directionally as preemptive streams, maximizing the capacity of any microservices ecosystem. Google has also taken a new approach to public projects: instead of just releasing a paper describing the concepts, it now also provides the implementation showing how to properly interpret the standard.
+</em>
+
+<h3><em>Introduction</em></h3>
+
+In order to understand gRPC and the flexibility with which it enables a microservices ecosystem to become a Reactive Actor Model, it is important to appreciate the nuances of the HTTP/2 protocol upon which it is based. Afterward we will describe the gRPC framework - focusing specifically on the gRPC-Java implementation - with the scope to expand this chapter over time to all implementations of gRPC. At the end we will cover examples demonstrating these ideas, taking a user through the initial steps of working with the gRPC-Java framework.
+
+<h3>1 <em>HTTP/2</em></h3>
+
+The HTTP/1.1 protocol was a success for a long time, though some key features began to be requested by the community with the growth of distributed computing, especially in the area of microservices. The phenomenon of creating more modularized functional units - organically constructed on a <em>share-nothing model</em> with a bidirectional, high-throughput request-and-response methodology - demands a new protocol for communication and integration. Thus HTTP/2 was born: a new standard, a binary wire protocol providing compressed streams that can be multiplexed for concurrency. As many microservices implementations scan header messages before actually processing any payload, in order to scale up the processing and routing of messages, HTTP/2 now provides header compression for this purpose. One last important benefit is that the server endpoint can push cached resources to the client based on anticipated future communication, dramatically saving client communication time and processing.
+
+<h3>1.1 <em>HTTP/2 Frames</em></h3>
+
+The HTTP/2 protocol is a framed protocol, which expands the capability for bidirectional, asynchronous communication. Every message is thus part of a frame that has a header, a frame type and a stream identifier, alongside the standard frame length used for processing. Each stream can have a priority, which allows dependencies between streams to form a <em>priority tree</em>. The data can be either a request or a response, which allows for the bidirectional communication, with the capability of flagging the communication for stream termination, flow control with priority settings, continuation, and push responses from the server for client confirmation. Below is the format of the HTTP/2 frame {% cite RFC7540 %}:
+
+<p align="center">
+ <img src="figures/http2-frame.png" /><br>
+ <em>Figure 1: The encoding of an HTTP/2 frame.</em>
+</p>
+
+<h3>1.2 <em>Header Compression</em></h3>
+
+The HTTP header is one of the primary methods of passing information about the state of other endpoints, the request or response, and the payload. It enables endpoints to save time when processing a large quantity of streams, by forwarding information along without wasting time inspecting the payload. Since the header information can be quite large, it is now possible to compress headers, allowing for better throughput and a greater capacity of stored stateful information.
+
+<h3>1.3 <em>Multiplexed Streams</em></h3>
+
+As streams are core to the implementation of HTTP/2, it is important to discuss the details of their implementation in the protocol. Many streams can be open simultaneously from many endpoints, and each stream will be in one of the states described below. The streams are multiplexed together, forming a chain of streams transmitted over the wire, which allows asynchronous, bi-directional concurrency to be performed by the receiving endpoint. Below is the lifecycle of a stream {% cite RFC7540 %}:
+
+<p align="center">
+ <img src="figures/http2-stream-lifecycle.png" /><br>
+ <em>Figure 2: The lifecycle of an HTTP/2 stream.</em>
+</p>
+
+To better understand this diagram, it is important to define some of the terms in it:
+
+<em>PUSH_PROMISE</em> - This is performed by one endpoint to alert another that it will be sending some data over the wire.
+
+<em>RST_STREAM</em> - This makes termination of a stream possible.
+
+<em>PRIORITY</em> - This is sent by an endpoint to indicate the priority of a stream.
+
+<em>END_STREAM</em> - This flag denotes the end of a <em>DATA</em> frame.
+
+<em>HEADERS</em> - This frame will open a stream.
+
+<em>Idle</em> - This is the initial state of a stream, before it is opened by sending or receiving a <em>HEADERS</em> frame.
+
+<em>Reserved (Local)</em> - To be in this state means that the endpoint has sent a PUSH_PROMISE frame.
+
+<em>Reserved (Remote)</em> - To be in this state means that the stream has been reserved by a remote endpoint.
+
+<em>Open</em> - To be in this state means that both endpoints can send frames.
+
+<em>Closed</em> - This is a terminal state.
+
+<em>Half-Closed (Local)</em> - This means that no frames can be sent except for <em>WINDOW_UPDATE</em>, <em>PRIORITY</em>, and <em>RST_STREAM</em>.
+
+<em>Half-Closed (Remote)</em> - This means that the remote endpoint will no longer use this stream to send frames of data.
+
+<h3>1.4 <em>Flow Control of Streams</em></h3>
+
+Since many streams compete for the bandwidth of a connection, flow control is needed to prevent bottlenecks and collisions in the transmission. This is done via the <em>WINDOW_UPDATE</em> payload for every stream - and for the overall connection as well - which lets the sender know how much room the receiving endpoint has for processing new data.
+
+<h3>2 <em>Protocol Buffers with RPC</em></h3>
+
+Though gRPC was built on top of HTTP/2, an IDL had to be used to perform the communication between endpoints. The natural direction was to use Protocol Buffers as the method of structuring key-value-based data for serialization between a server and a client. At the time gRPC development started, only version 2.0 (proto2) was available, which implemented only data structures, without any request/response mechanism. An example of a Protocol Buffer data structure would look something like this:
+
+```
+// A message containing the user's name.
+message Hello {
+ string name = 1;
+}
+```
+<p align="center">
+ <em>Figure 3: Protocol Buffer version 2.0 representing a message data-structure.</em>
+</p>
+
+This message will also be encoded for highest compression when sent over the wire. For example, let us say that the message is the string <em>"Hi"</em>. Every Protocol Buffer type has a value, and in this case a string has a type value of `2`, as noted in Table 1 {% cite Protobuf-Types %}.
+
+<p align="center">
+ <img src="figures/protobuf-types.png" /><br>
+ <em>Table 1: Tag values for Protocol Buffer types.</em>
+</p>
+
+One will notice that there is a number associated with each field element in the Protocol Buffer definition, which represents its <em>tag</em>. In Figure 3, the field `name` has a tag of `1`. When a message gets encoded, each field (key) starts with a one-byte value (8 bits), where the least-significant 3 bits encode the <em>type</em> and the remaining bits the <em>tag</em>. In this case the tag is `1` and the type is `2`, so the encoding is `00001 010`, which has a hexadecimal value of `0A`. The following byte is the length of the string, which is `2`, followed by the bytes `48` and `69` representing `H` and `i`. Thus the whole transmission looks as follows:
+
+```
+0A 02 48 69
+```
+
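+To see where these bytes come from, here is a minimal sketch - written in plain Java, independent of the Protocol Buffers library - that reproduces the encoding by hand:
+
+```java
+// Hand-encode the message {name: "Hi"} from Figure 3.
+// Key byte = (tag << 3) | wire_type, then the length, then the raw bytes.
+public class EncodeHello {
+    public static void main(String[] args) {
+        int tag = 1;       // field number of `name` in Figure 3
+        int wireType = 2;  // wire type for length-delimited fields (strings)
+        int key = (tag << 3) | wireType;  // 0b00001010 = 0x0A
+        byte[] name = "Hi".getBytes();    // 0x48, 0x69
+        System.out.printf("%02X %02X", key, name.length);
+        for (byte b : name) System.out.printf(" %02X", b);
+        System.out.println();             // prints: 0A 02 48 69
+    }
+}
+```
+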
+Thus the language had to be updated to support gRPC, and a service definition with a request and a response message was added in version 3.0.0 of Protocol Buffers. The updated implementation looks as follows {% cite HelloWorldProto %}:
+
+```
+// The request message containing the user's name.
+message HelloRequest {
+ string name = 1;
+}
+
+// The response message containing the greetings
+message HelloReply {
+ string message = 1;
+}
+
+// The greeting service definition.
+service Greeter {
+ // Sends a greeting
+ rpc SayHello (HelloRequest) returns (HelloReply) {}
+}
+```
+<p align="center">
+ <em>Figure 4: Protocol Buffer version 3.0.0 representing a message data-structure with the accompanied RPC definition.</em>
+</p>
+
+Notice the addition of a service, where the RPC call uses one of the messages as the structure of the <em>Request</em> and the other as the <em>Response</em> message format.
+
+Once such a Proto file is written, one compiles it with gRPC to generate the <em>Client</em> and <em>Server</em> files representing the classical two endpoints of an RPC implementation.
+
+<h3>3 <em>gRPC</em></h3>
+
+gRPC was built on top of HTTP/2; we will cover the specifics of gRPC-Java here, and expand to all the implementations with time. gRPC is a cross-platform framework that allows integration across many languages, as denoted in Figure 5 {% cite gRPC-Overview %}.
+
+<p align="center">
+ <img src="figures/grpc-cross-language.png" /><br>
+ <em>Figure 5: gRPC allows for asynchronous language-agnostic message passing via Protocol Buffers.</em>
+</p>
+
+To ensure scalability, benchmarks are run on a daily basis to verify that gRPC performs optimally under high-throughput conditions, as illustrated in Figure 6 {% cite gRPC-Benchmark %}.
+
+<p align="center">
+ <img src="figures/grpc-benchmark.png" /><br>
+ <em>Figure 6: Benchmark showing the queries-per-second on two virtual machines with 32 cores each.</em>
+</p>
+
+To standardize, most of the public Google APIs - including the Speech API, Vision API, Bigtable, Pub/Sub, etc. - have been ported to support gRPC, and their definitions can be found at the following location:
+
+<p align="center">
+ <img src="figures/grpc-googleapis.png" /><br>
+ <em>Figure 7: The public Google APIs have been updated for gRPC, and can be found at <a href="https://github.com/googleapis/googleapis/tree/master/google">https://github.com/googleapis/googleapis/tree/master/google</a></em>
+</p>
+
+
+<h3>3.1 <em>Supported Languages</em></h3>
+
+The officially supported languages are listed in Table 2 {% cite gRPC-Languages %}.
+
+<p align="center">
+ <img src="figures/grpc-languages.png" /><br>
+ <em>Table 2: Officially supported languages by gRPC.</em>
+</p>
+
+<h3>3.2 <em>Authentication</em></h3>
+
+There are two methods of authentication that are available in gRPC:
+
+* SSL/TLS
+* Google Token (via OAuth2)
+
+gRPC is flexible in that one can plug in a custom authentication system if that is preferred.
+
+<h3>3.3 <em>Development Cycle</em></h3>
+
+In its simplest form, using gRPC follows a structured set of steps, with this general flow:
+
+<em>1. Download gRPC for the language of interest.</em>
+
+<em>2. Implement the Request and Response definition in a ProtoBuf file.</em>
+
+<em>3. Compile the ProtoBuf file and run the code-generators for the specific language. This will generate the Client and Server endpoints.</em>
+
+<em>4. Customize the Client and Server code for the desired implementation.</em>
+
+Most of these steps will involve tweaking the Protobuf file and testing the throughput to ensure that the network and CPU capacities are optimally utilized.
+
+<h3>3.4 <em>The gRPC Framework (Stub, Channel and Transport Layer)</em></h3>
+
+One starts by initializing a communication <em>Channel</em> between the <em>Client</em> and a <em>Server</em> and storing it as a <em>Stub</em>. The <em>Credentials</em> are provided to the Channel when it is initialized. These form a <em>Context</em> for the Client's connection to the Server. Then a <em>Request</em> can be built based on the definition in the Protobuf file. The Request and the associated expected <em>Response</em> are executed by the <em>service</em> constructed in the Protobuf file. The Response is then parsed for any data coming from the Channel.
+
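+As a concrete illustration, below is a minimal sketch of this flow for the <em>Greeter</em> service of Figure 4 (<em>GreeterGrpc</em>, <em>HelloRequest</em> and <em>HelloReply</em> are the classes emitted by the code generator; the host and port are placeholders):
+
+```java
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+
+public class GreeterClientSketch {
+    public static void main(String[] args) {
+        // Open a Channel to the Server (plaintext here; Credentials could
+        // instead be supplied when the Channel is initialized).
+        ManagedChannel channel = ManagedChannelBuilder
+            .forAddress("localhost", 50051)
+            .usePlaintext(true)
+            .build();
+        // The Stub wraps the Channel and exposes the service as method calls.
+        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
+        // Build the Request from the Protobuf definition and execute the call.
+        HelloRequest request = HelloRequest.newBuilder().setName("gRPC").build();
+        HelloReply reply = stub.sayHello(request);  // blocks for the Response
+        System.out.println(reply.getMessage());
+        channel.shutdown();
+    }
+}
+```
+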
+The connection can be asynchronous and bi-directionally streaming, so that data is constantly flowing back and available to be read when ready. This allows one to treat the Client and Server as endpoints where one can adjust not just the flow, but also intercept and decorate calls to filter, request and retrieve the data of interest.
+
+The <em>Transport Layer</em> performs the retrieval and placement of the binary protocol on the wire. <em>gRPC-Java</em> has three implementations, though a user can implement their own: <em>Netty, OkHttp, and inProcess</em>.
+
+<h3>3.5 <em>gRPC Java</em></h3>
+
+The Java implementation of gRPC has been built with the mobile platform in mind, and to provide that capability it requires only JDK 6.0. Though the core of gRPC is built with data centers in mind - specifically supporting C/C++ on the Linux platform - the Java and Go implementations are two very reliable platforms for experimenting with microservice ecosystem implementations.
+
+There are several moving parts to understanding how gRPC-Java works. The first important step is to ensure that the Client and Server stub interface code gets generated by the Protobuf compiler plugin. The dependencies are usually placed in your <em>Gradle</em> build file, called `build.gradle`, as follows:
+
+```
+ compile 'io.grpc:grpc-netty:1.0.1'
+ compile 'io.grpc:grpc-protobuf:1.0.1'
+ compile 'io.grpc:grpc-stub:1.0.1'
+```
+
+When you build using Gradle, the appropriate base code gets generated for you, which you can then override to build your preferred implementation of the Client and Server.
+
+Since gRPC has to implement the HTTP/2 protocol, the chosen approach was to have a <em>Metadata</em> class that converts key-value pairs into HTTP/2 headers and vice-versa; the Netty implementation does this via <em>GrpcHttp2HeadersDecoder</em> and <em>GrpcHttp2OutboundHeaders</em>.
+
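+A small, hypothetical sketch of the <em>Metadata</em> class in use (the header name and the Greeter stub are made up for illustration):
+
+```java
+import io.grpc.Metadata;
+import io.grpc.stub.MetadataUtils;
+
+public class MetadataSketch {
+    // Returns a stub that sends an extra key-value pair; gRPC-Java converts
+    // the Metadata entries into HTTP/2 header fields on each call.
+    static GreeterGrpc.GreeterBlockingStub withTraceId(
+            GreeterGrpc.GreeterBlockingStub stub, String traceId) {
+        Metadata headers = new Metadata();
+        Metadata.Key<String> key =
+            Metadata.Key.of("x-trace-id", Metadata.ASCII_STRING_MARSHALLER);
+        headers.put(key, traceId);
+        return MetadataUtils.attachHeaders(stub, headers);
+    }
+}
+```
+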
+Another key insight is that the code handling the HTTP/2 conversion for the Client and the Server lives in the <em>NettyClientHandler.java</em> and <em>NettyServerHandler.java</em> classes, shown in Figures 8 and 9.
+
+<p align="center">
+ <img src="figures/grpc-client-transport-handler.png" /><br>
+ <em>Figure 8: The Client Transport Handler for gRPC-Java.</em>
+</p>
+
+<p align="center">
+ <img src="figures/grpc-server-transport-handler.png" /><br>
+ <em>Figure 9: The Server Transport Handler for gRPC-Java.</em>
+</p>
+
+
+<h3>3.5.1 <em>Downloading gRPC Java</em></h3>
+
+The easiest way to download the gRPC-Java implementation is by running the following command:
+
+```
+git clone -b v1.0.0 https://github.com/grpc/grpc-java.git
+```
+
+Next, compile on a Windows machine using Gradle (or Maven) with the following steps. If you are using any firewall software, it might be necessary to temporarily disable it while compiling gRPC-Java, as sockets are used for the tests:
+
+```
+cd grpc-java
+set GRADLE_OPTS=-Xmx2048m
+set JAVA_OPTS=-Xmx2048m
+set DEFAULT_JVM_OPTS="-Dfile.encoding=utf-8"
+echo skipCodegen=true > gradle.properties
+gradlew.bat build -x test
+cd examples
+gradlew.bat installDist
+```
+
+If you are having issues with Unicode (UTF-8) translation when using Git on Windows, you can try the following commands after entering the `examples` folder:
+
+```
+wget https://raw.githubusercontent.com/benelot/grpc-java/feb88a96a4bc689631baec11abe989a776230b74/examples/src/main/java/io/grpc/examples/routeguide/RouteGuideServer.java
+
+copy RouteGuideServer.java src\main\java\io\grpc\examples\routeguide\RouteGuideServer.java
+```
+
+<h3>3.5.2 <em>Running the Hello World Demonstration</em></h3>
+
+Make sure you open two Command (Terminal) windows, each within the `grpc-java\examples\build\install\examples\bin` folder. In the first of the two windows type the following command:
+
+```
+hello-world-server.bat
+```
+
+You should see the following:
+
+<p align="center">
+ <img src="figures/hello-world-server.png" /><br>
+ <em>Figure 10: The Hello World gRPC Server.</em>
+</p>
+
+In the second of the two windows type the following command:
+
+```
+hello-world-client.bat
+```
+
+You should see the following response:
+
+<p align="center">
+ <img src="figures/hello-world-client.png" /><br>
+ <em>Figure 11: The Hello World gRPC Client and the response from the Server.</em>
+</p>
+
+<h3>4 <em>Conclusion</em></h3>
+
+This chapter presented an overview of the concepts behind gRPC and HTTP/2, and will be expanded over time in both breadth and language implementations. In the area of microservices, one can see how a server endpoint can actually spawn more endpoints - where the message content is the protobuf definition for the new endpoints to be generated - enabling load-balancing much like the classical Actor Model.
+
+## References
+
+` `[Apigee]: https://www.youtube.com/watch?v=-2sWDr3Z0Wo
+
+` `[Authentication]: http://www.grpc.io/docs/guides/auth.html
+
+` `[Benchmarks]: http://www.grpc.io/docs/guides/benchmarking.html
+
+` `[CoreSurfaceAPIs]: https://github.com/grpc/grpc/tree/master/src/core
+
+` `[ErrorModel]: http://www.grpc.io/docs/guides/error.html
+
+` `[gRPC]: https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md
+
+` `[gRPC-Companies]: http://www.grpc.io/about/
+
+` `[gRPC-Languages]: http://www.grpc.io/docs/
+
+` `[gRPC-Protos]: https://github.com/googleapis/googleapis/
+
+` `[Netty]: http://netty.io/
+
+` `[RFC7540]: http://httpwg.org/specs/rfc7540.html
+
+` `[HelloWorldProto]: https://github.com/grpc/grpc/blob/master/examples/protos/helloworld.proto
+
+` `[Protobuf-Types]: https://developers.google.com/protocol-buffers/docs/encoding
+
+` `[gRPC-Overview]: http://www.grpc.io/docs/guides/
+
+` `[gRPC-Benchmark]: http://www.grpc.io/docs/guides/benchmarking.html
diff --git a/chapter/1/rpc.md b/chapter/1/rpc.md
index b4bce84..ccc9739 100644
--- a/chapter/1/rpc.md
+++ b/chapter/1/rpc.md
@@ -1,11 +1,381 @@
---
layout: page
-title: "Remote Procedure Call"
-by: "Joe Schmoe and Mary Jane"
+title: "RPC is Not Dead: Rise, Fall and the Rise of Remote Procedure Calls"
+by: "Muzammil Abdul Rehman and Paul Grosu"
---
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file rpc %}
+## Introduction:
+
+*Remote Procedure Call* (RPC) is a design *paradigm* that allows two entities to communicate over a communication channel in a general request-response mechanism. The definition of RPC has mutated and evolved significantly over the past four decades, and therefore the RPC *paradigm* is a generic, broadly classifying term for all the RPC-esque systems that have arisen in that time. RPC has moved on from a simple *client-server* design to a group of inter-connected *services*. While the initial RPC *implementations* were designed as tools for outsourcing computation to a server in a distributed system, RPC has evolved over the years to build language-agnostic ecosystems of applications. The RPC *paradigm* has been part of the driving force in creating truly revolutionary distributed systems, giving rise to various communication schemes and protocols between diverse systems.
+
+The RPC *paradigm* has been used to implement many of our everyday systems. From lower-level applications like Network File Systems {% cite sunnfs --file rpc %} and Remote Direct Memory Access {% cite rpcoverrdma --file rpc %} to ecosystems of microservices, RPC is used everywhere. RPC has a diverse variety of applications: SunNFS {% cite sunnfs --file rpc %}, Twitter's Finagle {% cite finagle --file rpc %}, Apache Thrift {% cite thrift --file rpc %}, Java RMI {% cite rmipaper --file rpc %}, SOAP, CORBA {% cite corba --file rpc %} and Google's gRPC {% cite grpc --file rpc %}, to name a few.
+
+Starting off as a synchronous, insecure, request-response system, RPC has evolved into a secure, asynchronous, resilient *paradigm* that has influenced protocols and programming designs like HTTP, REST, and just about anything with a request-response system. While the initial RPC implementations mainly focused on a local, private network - multiple clients communicating with a server and synchronously waiting for its response - modern RPC systems have *endpoints* communicating with each other, asynchronously passing arguments and processing responses, with two-way request-response streams (from client to server, and also from server to client).
+
+## Remote Procedure Calls:
+
+The *Remote Procedure Call paradigm* can be defined, at a high level, as a set of two communication *endpoints* connected over a network, with one endpoint sending a request and the other endpoint generating a response based on that request. In the simplest terms, it is a request-response paradigm where the two *endpoints*/hosts have different *address spaces*. The host that requests a remote procedure is referred to as the *caller*, and the host that responds to it as the *callee*.
+
+The *endpoints* in RPC can be a client and a server, two nodes in a peer-to-peer network, two hosts in a grid computation system, or even two microservices. RPC communication is not limited to two hosts; it can involve multiple hosts or *endpoints* {% cite anycastrpc --file rpc %}.
+
+<p align="center">
+[ Image Source: {% cite rpcimage --file rpc %}]
+</p>
+<figure>
+ <img src="{{ site.baseurl }}/resources/img/rpc_chapter_1_ycog_10_steps.png" alt="RPC in 10 Steps." />
+<p>Fig1. - Remote Procedure Call.</p>
+</figure>
+
+The simplest RPC implementation looks like Fig1. In this case, the *client* (or *caller*) and the *server* (or *callee*) are separated by a physical network. The main components of the system are the client routine/program, the client stub, the server routine/program, the server stub, and the network routines. A *stub* is a small program that is generally used as a stand-in (or an interface) for a larger program {% cite stubrpc --file rpc %}. A *client stub* exposes the functionality provided by the server routine to the client routine, while the server stub provides a client-like program to the server routine {% cite rpcimage --file rpc %}. The client stub takes the input arguments from the client program and returns the result, while the server stub provides input arguments to the server program and gets the results. The client program can only interact with the client stub, which provides the interface of the remote server to the client. This stub also provides marshalling/pickling/serialization of the input arguments sent to it by the client routine. Similarly, the server stub provides the client interface to the server routines, as well as marshalling services.
+
+When a client routine performs a *remote procedure*, it calls the client stub, which serializes the input arguments. This serialized data is sent to the server using OS network routines (TCP/IP) {% cite rpcimage --file rpc %}. The data is then deserialized by the server stub and presented to the server routines with the given arguments. The return value from the server routines is serialized again and sent over the network back to the client, where it is deserialized by the client stub and presented to the client routine. This *remote procedure* is generally hidden from the client routine, and appears as a *local procedure* to the client. RPC services also require a discovery service/host-resolution mechanism to bootstrap the communication between the client and the server.
+
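+As a toy illustration of the stub's role, here is a hypothetical hand-written client stub in Java for a remote `add` procedure (the procedure name, host and port are made up): the caller sees an ordinary local method, while the stub marshals the arguments by value, ships them over TCP, and unmarshals the result.
+
+```java
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.Socket;
+
+class AddClientStub {
+    int add(int a, int b) throws IOException {         // looks like a local call
+        try (Socket s = new Socket("localhost", 9090); // OS network routines
+             DataOutputStream out = new DataOutputStream(s.getOutputStream());
+             DataInputStream in = new DataInputStream(s.getInputStream())) {
+            out.writeUTF("add");  // name of the remote procedure
+            out.writeInt(a);      // marshal the arguments by value
+            out.writeInt(b);
+            out.flush();
+            return in.readInt();  // unmarshal the server's result
+        }
+    }
+}
+```
+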
+One important feature of RPC is the separate *address space* of each endpoint {% cite implementingrpc --file rpc %}. Hosts cannot share pointers or references to a memory location on another host. This *address space* isolation means that all the information passed in messages between the communicating hosts is passed by value (objects or variables), not by reference. Since RPC is a *remote* procedure call, the values sent to the *remote* host cannot be pointers or references to *local* memory. Passing links to a globally shared storage location (Amazon S3, Microsoft Azure, Google Cloud Store), however, is not impossible, but rather depends on the type of system (see the *Applications* section for details).
+
+Originally, RPC was developed as a synchronous request-response mechanism, tied to a specific programming language implementation, with a custom network protocol, built to outsource computation {% cite implementingrpc --file rpc %}. One of the earliest RPC-based systems {% cite implementingrpc --file rpc %} was implemented in the Cedar programming language in the early 1980's. The goal of this system was to provide programming semantics similar to local procedure calls. Developed for a LAN with an inefficient network protocol and a *serialization* scheme to transfer information using that protocol, this system aimed at executing a *procedure* (also referred to as a *method* or a *function*) in a remote *address space*. The single-threaded, synchronous client and server were written in Cedar, with a registry system used by the servers to *bind* (or register) their procedures. The clients used this registry to find a specific server to execute their *remote* procedures. This RPC implementation {% cite implementingrpc --file rpc %} had a very specific use-case: it was built for outsourcing computation inside the "Xerox research internetwork", a small, closed ethernet network with 16-bit addresses {% cite implementingrpc --file rpc %}.
+
+Modern RPC-based systems are language-agnostic, asynchronous, load-balanced systems. Authentication and authorization to these systems have been added as needed along with other security features. Most of these systems have fault-handling built into them as modules and the systems are generally spread all across the internet.
+
+RPC programs go over a network (or a communication channel); therefore, they need to handle remote errors and be able to communicate information successfully. Error handling generally varies, and is categorized as *remote-host* or *network* failure handling. Depending on the type of the system, and the error, the caller (or the callee) returns an error, and these errors can be handled accordingly. For asynchronous RPC calls, it is possible to specify events to ensure progress.
+
+RPC implementations use a *serialization* (also referred to as *marshalling* or *pickling*) scheme on top of an underlying communication protocol (traditionally TCP over IP). These *serialization* schemes allow both the *caller* and *callee* to be language-agnostic, letting the two systems be developed in parallel without any language restrictions. Some examples of serialization schemes are JSON, XML, and Protocol Buffers {% cite grpc --file rpc %}.
+
+Modern RPC systems allow different components of a larger system to be developed independently of one another. The language-agnostic nature combined with a decoupling of some parts of the system allows the two components (caller and callee) to scale separately and add new functionalities. This independent scaling of the system might lead to a mesh of interconnected RPC *services* facilitating one another.
+
+### Examples of RPC
+
+RPC has become very predominant in modern systems. Google performs on the order of 10^10^ RPC calls per second {% cite grpcpersec --file rpc %}. That's *tens of billions* of RPC calls *every second* - in a single day, more calls than the annual GDP of the United States in dollars {% cite usgdp --file rpc %}.
+
+In the simplest RPC systems, a client connects to a server over a network connection and performs a *procedure*. This procedure could be as simple as `return "Hello World"` in your favorite programming language. However, the complexity of this remote procedure has no upper bound.
+
+Here's the code of this simple RPC server, written in Python3.
+```python
+from xmlrpc.server import SimpleXMLRPCServer
+
+# a simple RPC function that returns "Hello World!"
+def remote_procedure():
+ return "Hello World!"
+
+server = SimpleXMLRPCServer(("localhost", 8080))
+print("RPC Server listening on port 8080...")
+server.register_function(remote_procedure, "remote_procedure")
+server.serve_forever()
+```
+
+The code for a simple RPC client for the above server, written in Python3, is as follows.
+
+```python
+import xmlrpc.client
+
+with xmlrpc.client.ServerProxy("http://localhost:8080/") as proxy:
+ print(proxy.remote_procedure())
+```
+
+In the above example, we create a simple function called `remote_procedure` and *bind* it to port *8080* on *localhost*. The RPC client then connects to the server and *requests* the `remote_procedure` with no input arguments. The server then *responds* with the return value of the `remote_procedure`.
+
+One can even view the *three-way handshake* used in establishing a TCP connection as an example of the RPC paradigm. Here, a server-side application *binds* to a port on the server, and a hostname resolution entry is added to a DNS server (which can be seen as a *registry* in RPC). When the client has to connect to the server, it asks the DNS server to resolve the hostname to an IP address, and then sends a SYN packet. This SYN packet can be seen as a *request* to another *address space*. The server, upon receiving it, returns a SYN-ACK packet, which can be seen as a *response* from the server as well as a *request* to establish the connection. The client then *responds* with an ACK packet.
+
+## Evolution of RPC:
+
+The RPC paradigm was first proposed in the 1970's and still continues as a relevant model for performing distributed computation; initially developed for a LAN, it can now be implemented on open networks, as web services across the internet. It has had a long and arduous journey to its current state. Here are the three main (overlapping) stages that RPC went through.
+
+### The Rise: All Hail RPC (Early 1970's - Mid 1980's)
+
+RPC started off strong. With RFC 674 {% cite rfc674 --file rpc %} and RFC 707 {% cite rfc674 rfc707 --file rpc %} specifying the design of Remote Procedure Calls, followed by Nelson et al. {% cite implementingrpc --file rpc %} producing the first RPC implementation for the Cedar programming language, RPC revolutionized systems in general and gave rise to some of the earliest distributed systems (apart from the internet, of course).
+
+With these early achievements, people started using RPC as the de facto design choice. It became a Holy Grail in the systems community for a few years after the first implementation.
+
+### The Fall: RPC is Dead (Late 1970's - Late 1990's)
+
+RPC, despite its initial success, wasn't without flaws. Within a year of its inception, the limitations of RPC started to catch up with it. RFC 684 criticized RPC for its latency, failure semantics, and cost, and pointed to message-passing systems as an alternative to the RPC design. Similarly, a few years down the road, in 1988, Tanenbaum et al. presented similar concerns against RPC {%cite critiqueofrpc --file rpc %}: problems with heterogeneous devices, message passing as an alternative, packet loss, network failure, and RPC's synchronous nature, highlighting that RPC is not a one-size-fits-all model.
+
+In 1994, *A Note on Distributed Computing* was published, claiming RPC to be "fundamentally flawed" {%cite notedistributed --file rpc %}. It discussed the unified object view and cited four main problems with dividing these objects for distributed computing in RPC: communication latency, address space separation, partial failures, and concurrency issues (resulting from two concurrent client requests accessing the same remote object). Although most of these problems (except partial failures) are inherently associated with distributed computing itself, partial failures in an RPC system mean that progress might not always be possible.
+
+This era wasn't a dead end for RPC, though. Some of the preliminary designs for modern RPC systems were introduced in this era. Perhaps the earliest such system was SunRPC {% cite sunnfs --file rpc %}, used for the Sun Network File System (NFS). Soon to follow SunRPC was CORBA {% cite corba --file rpc %}, which was followed by Java RMI {% cite rmipaper --file rpc %}.
+
+However, the initial implementations of these systems were riddled with various issues and design flaws. For instance, Java RMI didn't handle network failures, assuming a reliable network with zero latency {% cite rmipaper --file rpc %}.
+
+### The Rise, Again: Long Live RPC (Late 1990's - Today)
+
+Despite facing problems in its early days, RPC withstood the test of time. Researchers realized the limitations of RPC and focused on rectifying them; instead of enforcing RPC everywhere, they started to use it in the applications where it was needed. Designers started adding exception handling, asynchrony, network failure handling, and heterogeneity between different languages/devices to RPC.
+
+In this era, SunRPC went through various additions and came to be known as Open Network Computing RPC (ONC RPC). CORBA and RMI also underwent various modifications as internet standards were set.
+
+A new breed of RPC also started in this era - async (asynchronous) RPC - giving rise to systems that use *futures* and *promises*, like Finagle {% cite finagle --file rpc %} and Cap'n Proto (post-2010).
+
+
+<p align="center">
+[ Image Source: {% cite norman --file rpc %}]
+</p>
+<figure>
+ <img src="{{ site.baseurl }}/resources/img/rpc_chapter_1_syncrpc.jpg" alt="Synchronous RPC." />
+<p>Fig2. - Synchronous RPC.</p>
+</figure>
+
+
+<p align="center">
+[ Image Source: {% cite norman --file rpc %}]
+</p>
+<figure>
+ <img src="{{ site.baseurl }}/resources/img/rpc_chapter_1_asyncrpc.jpg" alt="Asynchronous RPC." />
+<p>Fig3. - Asynchronous RPC.</p>
+</figure>
+
+
+A traditional, synchronous RPC is a *blocking* operation, while an asynchronous RPC is a *non-blocking* operation {%cite dewan --file rpc %}. Fig2. shows a synchronous RPC call while Fig3. shows an asynchronous one. In synchronous RPC, the client sends a request to the server, then blocks and waits for the server to perform the computation and return the result; only after getting the result does the client proceed. In asynchronous RPC, the client sends a request to the server and waits only for the acknowledgment of the delivery of the input parameters/arguments. The client then proceeds onwards, and when the server is finished processing, it sends an interrupt to the client. The client receives this message, reads the results, and continues.
+
+Asynchronous RPC separates the remote call from the return value, making it possible to write a single-threaded client that handles multiple RPC calls at the specific intervals it needs to process {%cite async --file rpc%}. It also allows easier handling of slow clients/servers, as well as easier transfer of large data (due to its incremental nature) {%cite async --file rpc%}.
+
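+As a sketch of the asynchronous style, gRPC-Java generates a future-returning stub alongside the blocking one; here we borrow the Greeter service from our gRPC chapter (the generated classes and the already-built channel are assumptions of the example):
+
+```java
+import com.google.common.util.concurrent.ListenableFuture;
+import io.grpc.ManagedChannel;
+
+class AsyncRpcSketch {
+    static void greet(ManagedChannel channel) throws Exception {
+        GreeterGrpc.GreeterFutureStub stub = GreeterGrpc.newFutureStub(channel);
+        // The call returns immediately with a future instead of blocking.
+        ListenableFuture<HelloReply> future =
+            stub.sayHello(HelloRequest.newBuilder().setName("async").build());
+        // ... the client is free to do other work here ...
+        HelloReply reply = future.get();  // rendezvous with the result later
+        System.out.println(reply.getMessage());
+    }
+}
+```
+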
+In the post-2000 era, MAUI{% cite maui --file rpc %}, Cap'n Proto{% cite capnprotosecure --file rpc %}, gRPC{% cite grpc --file rpc %}, Thrift{% cite thrift --file rpc %} and Finagle{% cite finagle --file rpc %} have been released, which have significantly boosted the widespread use of RPC.
+
+Most of these newer systems came with their own Interface Description Languages (IDLs). An IDL specifies the common protocol and interfacing language used to transfer information between clients and servers written in different programming languages, making these RPC implementations language-agnostic. Some of the most common serialization formats used with them are JSON, XML, and ProtoBufs.
+
+A high-level overview of some of the most important RPC implementations follows.
+
+#### Java Remote Method Invocation
+Java RMI (Java Remote Method Invocation) {% cite rmibook --file rpc %} is a Java implementation for performing RPC (Remote Procedure Calls) between a client and a server. The client, using a stub, passes information via a socket connection over the network to the server that contains the remote objects. The Remote Object Registry (ROR) {% cite rmipaper --file rpc %} on the server holds the references to the objects that can be accessed remotely, and it is through the ROR that the client connects. The client can then request the invocation of methods on the server, which processes the requested call and responds with the answer.
+
+RMI provides some security by being encoded, but not encrypted, though that can be augmented by tunneling over a secure connection or other methods. Moreover, RMI is very specific to Java; it cannot take advantage of the language-independence feature that is inherent to most RPC implementations. Perhaps the main problem with RMI is its *access transparency*: a programmer (as opposed to the client program) cannot distinguish between local objects and remote objects, making it relatively difficult to handle partial failures in the network {%cite roi --file rpc %}.
+
+#### CORBA
+CORBA (Common Object Request Broker Architecture) {% cite corba --file rpc %} was created by the Object Management Group {% cite corbasite --file rpc %} to allow language-agnostic communication among multiple computers. It is an object-oriented model defined via an Interface Definition Language (IDL), with communication managed through an Object Request Broker (ORB), which acts as a broker for objects. CORBA can be viewed as a language-independent RMI system where each client and server have an ORB by which they communicate. The benefit of CORBA is that it allows multi-language implementations to communicate with each other, but much of the criticism around CORBA relates to poor consistency among implementations, and it is relatively outdated by now. Moreover, CORBA suffers from the same access transparency issues as Java RMI.
+
+#### XML-RPC and SOAP
+The XML-RPC specification {% cite Wiener --file rpc%} performs an HTTP POST request to a server, formatted as XML composed of a *header* and a *payload* that calls only one method. It was originally released in the late 1990's and, unlike RMI, it provides transparency by using HTTP as the transport mechanism.
+
+The header has to provide the basic information, like the user agent and the size of the payload. The payload initiates a `methodCall` structure by specifying the name via `methodName` and the associated parameter values. Parameters for the method can be scalars, structures or (recursive) arrays. The type of a scalar can be one of `i4`, `int`, `boolean`, `string`, `double`, `dateTime.iso8601` or `base64`. The scalars are used to create the more complex structures and arrays.
+
+Below is an example as provided by the XML-RPC documentation{% cite Wiener --file rpc%}:
+
+```XML
+
+POST /RPC2 HTTP/1.0
+User-Agent: Frontier/5.1.2 (WinNT)
+Host: betty.userland.com
+Content-Type: text/xml
+Content-length: 181
+
+<?xml version="1.0"?>
+<methodCall>
+ <methodName>examples.getStateName</methodName>
+ <params>
+ <param>
+ <value><i4>41</i4></value>
+ </param>
+ </params>
+</methodCall>
+```
+
+The response to a request will have the `methodResponse` with `params` and values, or a `fault` with the associated `faultCode` in case of an error {% cite Wiener --file rpc %}:
+
+```XML
+HTTP/1.1 200 OK
+Connection: close
+Content-Length: 158
+Content-Type: text/xml
+Date: Fri, 17 Jul 1998 19:55:08 GMT
+Server: UserLand Frontier/5.1.2-WinNT
+
+<?xml version="1.0"?>
+<methodResponse>
+ <params>
+ <param>
+ <value><string>South Dakota</string></value>
+ </param>
+ </params>
+</methodResponse>
+```
+
+SOAP (Simple Object Access Protocol) is the successor of XML-RPC as a web-services protocol for communicating between a client and a server. It was initially designed by a group at Microsoft {% cite soaparticle1 --file rpc %}. A SOAP message is an XML-formatted message composed of an envelope, inside which a header and a payload are provided (just like XML-RPC). The payload of the message contains the request and response of the message, and it can be transmitted over HTTP or SMTP (unlike XML-RPC).
+
+SOAP can be viewed as a superset of XML-RPC that provides support for more complex authentication schemes {%cite soapvsxml --file rpc %}, as well as support for WSDL (Web Services Description Language), allowing easier discovery of and integration with remote web services {%cite soapvsxml --file rpc %}.
+
+The benefit of SOAP is that it provides the flexibility of transmission over multiple transport protocols. The XML-based messages allow SOAP to be language-agnostic, though parsing such messages can become a bottleneck.
+
+#### Thrift
+Thrift is an *asynchronous* RPC system created by Facebook and now part of the Apache Foundation {% cite thrift --file rpc %}. It provides a language-agnostic Interface Description Language (IDL) with which one generates the code for the client and the server. It provides the opportunity for compressed serialization by customizing the protocol and the transport after the description file has been processed.
+
+Perhaps the biggest advantage of Thrift is that its binary data format has very low overhead. It has a relatively low transmission cost (compared to alternatives like SOAP) {%cite thrifttut --file rpc %}, making it very efficient for transferring large amounts of data.
+
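+A minimal sketch of a Thrift client using the binary protocol (`Greeter.Client` stands in for whatever client class Thrift generates from an IDL file; the host, port and `sayHello` method are placeholders):
+
+```java
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+
+public class ThriftClientSketch {
+    public static void main(String[] args) throws Exception {
+        TTransport transport = new TSocket("localhost", 9090);
+        transport.open();
+        // TBinaryProtocol is Thrift's low-overhead binary wire format.
+        Greeter.Client client = new Greeter.Client(new TBinaryProtocol(transport));
+        System.out.println(client.sayHello("Thrift"));  // synchronous remote call
+        transport.close();
+    }
+}
+```
+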
+#### Finagle
+Finagle is a fault-tolerant, protocol-agnostic runtime for doing RPC, with a high-level API for composing futures (see the discussion of async RPC above) and the RPC calls generated under the hood. It was created by Twitter and is written in Scala to run on a JVM. It is based on three object types: Service objects, Filter objects and Future objects {% cite finagle --file rpc %}.
+
+Future objects act by being asynchronously requested for a computation that will return a response at some time in the future. These Future objects are the main communication mechanism in Finagle: all the inputs and outputs are represented as Future objects.
+
+Service objects are endpoints that return a Future upon processing a request. These Service objects can be viewed as the interfaces used to implement a client or a server.
+
+A sample Finagle server that reads a request and returns the version of the request is shown below. This example is taken from the Finagle documentation {% cite finagletut --file rpc %}.
+
+```Scala
+import com.twitter.finagle.{Http, Service}
+import com.twitter.finagle.http
+import com.twitter.util.{Await, Future}
+
+object Server extends App {
+ val service = new Service[http.Request, http.Response] {
+ def apply(req: http.Request): Future[http.Response] =
+ Future.value(
+ http.Response(req.version, http.Status.Ok)
+ )
+ }
+ val server = Http.serve(":8080", service)
+ Await.ready(server)
+}
+```
+
+A Filter object transforms requests for further processing, in case additional customization of a request is required. Filters provide program-independent operations, like timeouts. They take in a Service and provide a new Service object with the Filter applied. Aggregating multiple Filters is also possible in Finagle.
+
+A sample timeout Filter that takes in a service and creates a new service with timeouts is shown below. This example is taken from the Finagle documentation {% cite finagletut --file rpc %}.
+
+```Scala
+import com.twitter.finagle.{Service, SimpleFilter}
+import com.twitter.util.{Duration, Future, Timer}
+
+class TimeoutFilter[Req, Rep](timeout: Duration, timer: Timer)
+ extends SimpleFilter[Req, Rep] {
+
+ def apply(request: Req, service: Service[Req, Rep]): Future[Rep] = {
+ val res = service(request)
+ res.within(timer, timeout)
+ }
+}
+```
+
+#### Open Network Computing RPC (ONC RPC)
+ONC was originally introduced as SunRPC {%cite sunnfs --file rpc %} for the Sun NFS. The Sun NFS system had a stateless server with client-side caching, unique file handlers, and support for NFS read, write, truncate, unlink, etc. operations. SunRPC was later revised as ONC in 1995 {%cite rfc1831 --file rpc %} and again in 2009 {%cite rfc5531 --file rpc %}. The IDL used in ONC (and SunRPC) is External Data Representation (XDR), a serialization mechanism specific to network communication; therefore, ONC is mostly limited to applications like network file systems.
+
+#### Mobile Assistance Using Infrastructure (MAUI)
+The MAUI project {% cite maui --file rpc %}, developed by Microsoft, is a computation offloading system for mobile devices. It is an automated system that offloads mobile code to a dedicated infrastructure in order to increase the battery life of the mobile device, minimize the load on the programmer, and perform complex computations offsite. MAUI uses RPC as the communication protocol between the mobile device and the infrastructure.
+
+#### gRPC
+
+gRPC is a multiplexed, bi-directional streaming RPC protocol developed by Google and Square. The IDL for gRPC is Protocol Buffers (also referred to as ProtoBuf), and it is meant as a public replacement for Stubby, ARCWire, and Sake {% cite Apigee --file rpc %}. More details on Protocol Buffers, Stubby, ARCWire, and Sake are available in our gRPC chapter {% cite grpcchapter --file rpc %}.
+
+gRPC provides a platform for scalable, bi-directional streaming using both synchronized and asynchronous communication.
+
+In a general RPC mechanism, the client initiates a connection to the server, and only the client can *request* while the server can only *respond* to the incoming requests. However, in bi-directional gRPC streams, although the initial connection is initiated by the client (call it *endpoint 1*), once the connection is established both the server (call it *endpoint 2*) and *endpoint 1* can send *requests* and receive *responses*. This significantly eases development where both *endpoints* are communicating with each other (as in grid computing). It also saves the hassle of creating two separate connections between the endpoints (one from *endpoint 1* to *endpoint 2* and another from *endpoint 2* to *endpoint 1*), since the two streams are independent.
+
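+A rough sketch of what this looks like in gRPC-Java, assuming a hypothetical proto service declared as `rpc Chat (stream ChatNote) returns (stream ChatNote)`; once the client opens the call, both sides read and write independently:
+
+```java
+import io.grpc.stub.StreamObserver;
+
+class ChatSketch {
+    static void chat(ChatGrpc.ChatStub asyncStub) {
+        // The observer passed in handles the server -> client stream.
+        StreamObserver<ChatNote> requests =
+            asyncStub.chat(new StreamObserver<ChatNote>() {
+                @Override public void onNext(ChatNote note) {
+                    System.out.println("got: " + note.getText());
+                }
+                @Override public void onError(Throwable t) { t.printStackTrace(); }
+                @Override public void onCompleted() { System.out.println("done"); }
+            });
+        // The returned observer is the client -> server stream.
+        requests.onNext(ChatNote.newBuilder().setText("hello").build());
+        requests.onCompleted();
+    }
+}
+```
+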
+gRPC multiplexes the requests over a single connection, using header compression. This makes it possible for gRPC to be used for mobile clients, where battery life and data usage are important.
+The core library is in C - except for the Java and Go implementations - and surface APIs for all the other languages connect through it {% cite CoreSurfaceAPIs --file rpc %}.
+
+Since Protocol Buffers has been utilized by many individuals and companies, gRPC makes it natural for them to extend their RPC ecosystems via gRPC. Companies like Cisco, Juniper and Netflix {% cite gRPCCompanies --file rpc %} have found it practical to adopt it.
+A majority of the Google public APIs, like the places and maps APIs, have been ported to gRPC ProtoBuf {% cite gRPCProtos --file rpc %} as well.
+
+More details about gRPC and bi-directional streaming can be found in our gRPC chapter {% cite grpcchapter --file rpc %}.
+
+#### Cap'n Proto
+Cap'n Proto {% cite capnprotosecure --file rpc %} is a data interchange RPC system that bypasses the data-encoding step (as found in JSON or ProtoBuf) to significantly improve performance. It was developed by the original author of gRPC's ProtoBuf, but since its messages are raw binary data used directly as the wire format, with no encoding/decoding step, it outperforms gRPC's ProtoBuf. It uses futures and promises to combine various remote operations into a single operation, saving transportation round-trips. This means that if a client calls a function `foo` and then calls another function `bar` on the output of `foo`, Cap'n Proto will aggregate these two operations into a single `bar(foo(x))`, where `x` is the input to the function `foo` {% cite capnprotosecure --file rpc %}. This saves multiple round-trips, especially in object-oriented programs.
+
+### The Heir to the Throne: gRPC or Thrift
+
+Although there are many candidates to be considered as top contenders for the RPC throne, most of them are targeted at a specific type of application. ONC is generally specific to the Network File System (though it is being pushed as a standard), Cap'n Proto is relatively new and untested, MAUI is specific to mobile systems, the open-source Finagle is primarily used at Twitter (not widespread), and Java RMI simply doesn't come close, due to its transparency issues (sorry to burst your bubble, Java fans).
+
+Probably the most powerful and practical systems out there are Apache Thrift and Google's gRPC, primarily because these two systems cater to a large number of programming languages, have a significant performance benefit over other techniques, and are being actively developed.
+
+Thrift was released years earlier (Facebook open-sourced it in 2007), while the first stable release of gRPC came out in August 2016. However, despite being 'out there' longer, Thrift is currently less popular than gRPC {%cite trendrpcthrift --file rpc %}.
+
+gRPC {% cite gRPCLanguages --file rpc %} and Thrift both support most of the popular languages, including Java, C/C++, and Python. Thrift additionally supports languages like Ruby, Erlang, Perl, JavaScript, Node.js and OCaml, while gRPC additionally supports Node.js and Go.
+
+The gRPC core is written in C (with the exception of the Java and Go implementations), with wrappers written in other languages to communicate with the core, while the Thrift core is written in C++.
+
+gRPC also provides easier bidirectional streaming communication between the caller and the callee. The client generally initiates the communication {% cite gRPCLanguages --file rpc %}, and once the connection is established, the client and the server can perform reads and writes independently of each other. Bi-directional streaming in Thrift, however, can be difficult to handle, since Thrift focuses explicitly on a client-server model; to enable bidirectional, async streaming, one may have to run two separate systems {%cite grpcbetter --file rpc%}.
+
+The two also differ in exception handling: in Thrift, exceptions can be returned built into the message, while in gRPC the programmer has to define and handle this behavior explicitly. Thrift's built-in exception handling makes it easier to write client-side applications.
+
+Although custom authentication mechanisms can be implemented in both systems, gRPC comes with Google-backed authentication using SSL/TLS and Google tokens {% cite grpcauth --file rpc %}.
+
+Moreover, gRPC network communication is done over HTTP/2, which makes it feasible for communicating parties to multiplex network connections over the same port. This is more efficient (in terms of memory usage) compared to HTTP/1.1. Since gRPC communication runs over HTTP/2, gRPC can easily multiplex different services. Multiplexing services is possible in Thrift too, but, for lack of support from the underlying transport protocol, it is performed in code using the `TMultiplexedProcessor` class {% cite multiplexingthrift --file rpc %}.
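+
+As a rough sketch of the Thrift side (written in Scala against Thrift's Java API; the two processor parameters stand in for hypothetical compiler-generated processors):
+
+```scala
+import org.apache.thrift.{TMultiplexedProcessor, TProcessor}
+
+// Expose several Thrift services behind a single server port by
+// registering each generated processor under a service name.
+def multiplexed(calc: TProcessor, weather: TProcessor): TMultiplexedProcessor = {
+  val processor = new TMultiplexedProcessor()
+  processor.registerProcessor("Calculator", calc)
+  processor.registerProcessor("Weather", weather)
+  processor
+}
+```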
+
+Both gRPC and Thrift, however, allow async RPC calls: a client can send a request to the server and continue with its execution, and the response from the server is processed when it arrives.
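+
+A rough illustration of this pattern in Scala, with a hypothetical client stub whose method issues the RPC and returns a future:
+
+```scala
+import scala.concurrent.Future
+import scala.concurrent.ExecutionContext.Implicits.global
+
+// Hypothetical async stub, as an RPC framework might generate it.
+trait UserServiceStub { def getUser(id: Int): Future[String] }
+
+def demo(stub: UserServiceStub): Unit = {
+  val reply = stub.getUser(42)                 // returns immediately
+  reply.foreach(user => println(s"got $user")) // runs when the response arrives
+  println("client keeps executing while the RPC is in flight")
+}
+```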
+
+
+The major differences between gRPC and Thrift can be summarized in the following table.
+
+| Comparison | Thrift | gRPC |
+| ----- | ----- | ----- |
+| License | Apache2 | BSD |
+| Sync/Async RPC | Both | Both |
+| Supported Languages | C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, and OCaml | C/C++, Python, Go, Java, Ruby, PHP, C#, Node.js, Objective-C |
+| Core Language | C++| C |
+| Exceptions | Can be built into the message | Handled by the programmer |
+| Authentication | Custom | Custom + Google Tokens |
+| Bi-Directionality | Not straightforward | Straightforward |
+| Multiplexing | Possible via the `TMultiplexedProcessor` class | Possible via HTTP/2 |
+
+It's difficult to definitively pick one over the other. However, with the increasing popularity of gRPC, and given that it's still in the early stages of development, the general trend {%cite trendrpcthrift --file rpc %} over the past year has shifted in favor of gRPC, and it's giving Thrift a run for its money. While it may not be a rigorous metric, gRPC was searched for, on average, three times as often as Thrift {%cite trendrpcthrift --file rpc %}.
+
+**Note:** This comparison was performed in December 2016, so the results are expected to change with time.
+
+## Applications:
+
+Since its inception, various papers have been published applying the RPC paradigm to different domains, as well as using RPC implementations to create new systems. Here are some of the applications and systems that have incorporated RPC.
+
+#### Shared State and Persistence Layer
+
+One major limitation (and, arguably, advantage) of RPC is the separate *address space* of all the machines in the network: *pointers* or *references* to a data object cannot be passed between the caller and the callee. Interweave {% cite interweave2 interweave1 interweave3 --file rpc %} is a *middleware* system that addresses this by allowing scalable sharing of arbitrary datatypes among language-independent processes running on heterogeneous hardware. Interweave is specifically designed to be compatible with RPC-based systems, and allows easier access to resources shared between different applications using memory blocks and locks.
+
+Although research has been done on providing a global shared state for RPC-based systems, these systems tend to take away the independence and modularity of the *caller* and the *callee* by using shared storage instead of separate *address spaces*.
+
+#### GridRPC
+
+Grid computing is one of the most widely used applications of the RPC paradigm. At a high level, it can be seen as a mesh (or network) of computers connected to form a *grid*, such that each system can leverage resources from any other system in the network.
+
+In the GridRPC paradigm, each computer in the network can act as the *caller* or the *callee* depending on the amount of resources required {% cite grid1 --file rpc %}. It's also possible for the same computer to act as the *caller* as well as the *callee* for *different* computations.
+
+Some of the most popular implementations of GridRPC-compliant middleware are GridSolve {% cite gridsolve1 gridsolve2 --file rpc %} and Ninf-G {% cite ninf --file rpc %}. Ninf is older than GridSolve and was first published in the late 1990s. It's a simple RPC layer that also provides authentication and secure communication between the two parties. GridSolve, on the other hand, is more complex and provides middleware for communication using a client-agent-server model.
+
+#### Mobile Systems and Computation Offloading
+
+Mobile systems have become very powerful. With multi-core processors and gigabytes of RAM, they can undertake relatively complex computations without a hassle. But this capability has costs: their batteries, despite becoming larger, drain quickly with usage, and mobile data (network bandwidth) is still limited and expensive. For these reasons, it's better to offload computations from mobile systems when possible, and RPC plays an important role in the communication for this *computation offloading*. Some of these services use GridRPC technologies to offload computation, while others use an RMI (Remote Method Invocation) system.
+
+The Ibis Project {% cite ibis --file rpc %} builds an RMI (similar to Java RMI) and GMI (Group Method Invocation) model to facilitate outsourcing computation. Cuckoo {% cite cuckoo --file rpc %} uses the Ibis communication middleware to offload computation from applications (built using Cuckoo) running on Android smartphones to remote Cuckoo servers.
+
+Microsoft's MAUI project {% cite maui --file rpc %} uses RPC communication and allows partitioning of .NET applications and "fine-grained code offload to maximize energy savings with minimal burden on the programmer". MAUI decides at runtime which methods to offload to the external MAUI server.
+
+#### Async RPC, Futures and Promises
+
+Remote procedure calls can be asynchronous, and these async RPCs play an integral role in *futures* and *promises*. *Futures* and *promises* are programming constructs in which a *future* acts as a placeholder for a variable, datum, return value, or error, while a *promise* is a *future* that doesn't have a value yet. We follow Finagle's {% cite finagle --file rpc %} definitions, where the *promise* of a *future* (an empty *future*) is considered the *request*, while the asynchronous fulfillment of this *promise* by a *future* is seen as the *response*. This construct is primarily used for asynchronous programming.
+
+Perhaps the most renowned systems using this type of RPC model are Twitter's Finagle{% cite finagle --file rpc %} and Cap'n Proto{% cite capnprotosecure --file rpc %}.
+
+#### RPC in Microservices Ecosystem:
+
+RPC implementations have moved from a one-server model to multiple servers, and on to dynamically created, load-balanced microservices. What started as separate implementations -- REST, streaming RPC, MAUI, gRPC, Cap'n Proto -- can now be integrated behind a single abstraction: the *endpoint*. Endpoints are the building blocks of *microservices*. A *microservice* is usually a *service* with a very simple, well-defined purpose, written in almost any language, that interacts with other microservices to give the feel of one large monolithic *service*. These microservices are language-agnostic: one *microservice* for airline tickets written in C/C\++ might communicate with a number of other microservices for individual airlines written in different languages (Python, C\++, Java, Node.js) using a language-agnostic, asynchronous RPC framework like gRPC {%cite grpc --file rpc %} or Thrift {%cite thrift --file rpc %}.
+
+The use of RPC has allowed us to create new microservices on the fly. Microservices can not only be created and bootstrapped at runtime, but also come with inherent features like load balancing and failure recovery. This bootstrapping might occur on the same machine, by adding to a Docker container {% cite docker --file rpc %}, or across a network (using any combination of DNS, NATs, or other mechanisms).
+
+RPC can be described as the "glue" that holds all the microservices together {% cite microservices1rpc --file rpc %}: it is one of the primary communication mechanisms between microservices running on different systems. A microservice requests another microservice to perform an operation or query; the other microservice, upon receiving such a request, performs the operation and returns a response. This operation can range from a simple computation, to invoking another microservice and setting off a series of RPC events, to creating new microservices on the fly to dynamically load-balance the system.
+
+An example of a microservices ecosystem that uses futures/promises is Finagle{%cite finagle --file rpc %} at Twitter.
+
+## Security in RPC:
+
+The initial RPC implementation {% cite implementingrpc --file rpc %} was developed for an isolated LAN and didn't focus much on security. That model has various attack surfaces: a malicious registry, a malicious server, a client mounting a denial-of-service attack, or a man-in-the-middle between client and server.
+
+As time progressed and the internet evolved, new standards came along and RPC implementations became much more secure. Security in RPC is generally added as a *module* or a *package*: these modules provide libraries for authentication and authorization of the communicating services (caller and callee). The modules are not always bug-free, and it's sometimes possible to gain unauthorized access to a system. The security community tries to rectify such situations using code inspection and bug bounty programs to catch bugs beforehand; yet with time new bugs arise, and the cycle continues -- a vicious cycle between attackers and security experts, each trying to outdo the other.
+
+For example, the Oracle Network File System uses *Secure RPC* {% cite oraclenfs --file rpc %} to perform authentication in NFS. This *Secure RPC* uses a Diffie-Hellman authentication mechanism with DES encryption to allow only authorized users to access the NFS. Similarly, Cap'n Proto {% cite capnprotosecure --file rpc %} claims to be resilient to memory leaks, segfaults, and malicious inputs, and to be usable between mutually untrusting parties. However, in Cap'n Proto "the RPC layer is not robust against resource exhaustion attacks, possibly allowing denials of service", nor has it undergone any formal verification {% cite capnprotosecure --file rpc %}.
+
+Although it's possible to come up with a *threat model* that would make any RPC implementation look insecure, one has to understand that using any distributed system increases the attack surface. Claiming one *paradigm* to be more secure than another would be a biased statement, since *paradigms* are generally ideas, and it is up to system designers to use these *paradigms* to build their systems and take care of features specific to real systems, like security and load balancing. There's always the possibility of a request being rerouted to a malicious server (if the registry gets hacked), or of there being no trust between the *caller* and the *callee*. We maintain that the RPC *paradigm* itself is neither secure nor insecure, and that the most secure systems are the ones in an isolated environment, disconnected from the public internet, with a self-destruct mechanism {% cite selfdest --file rpc %} in place, in an impenetrable bunker, guarded by the Knights Templar (*they don't exist! Well, maybe Fort Meade comes close*).
+
+## Discussion:
+
+The RPC *paradigm* shines the most in *request-response* mechanisms. Futures and promises also appear to be a new breed of RPC. This leads one to question whether every *request-response* system is a modified implementation of the RPC *paradigm*, or whether it actually brings something new to the table. Modern communication protocols, like HTTP and REST, might just be a different flavor of RPC. In HTTP, a client *requests* a web page (or some other content), and the server *responds* with the required content. The dynamics of this communication might differ slightly from traditional RPC, but a stateless HTTP server adheres to most of the concepts behind the RPC *paradigm*. Similarly, consider sending a request to your favorite Google API. Say you want to translate your latitude/longitude to an address using the Reverse Geocoding API, or to find a good restaurant in your vicinity using the Places API: you'll send a *request* to their server to perform a *procedure* that takes a few input arguments, like the coordinates, and returns the result. Even though these APIs follow a RESTful design, they look like an extension of the RPC *paradigm*.
+
+The RPC paradigm has evolved over time, to the extent that it has become very difficult to differentiate RPC from non-RPC. With each passing year, the restrictions and limitations of RPC recede. Current RPC implementations even support the server *requesting* information from the client in order to *respond* to its requests, and vice versa (bidirectionality). This *bidirectional* nature has transformed RPC from a simple *client-server* model into a set of *endpoints* communicating with each other.
+
+For the past four decades, researchers and industry leaders have tried to come up with *their* definition of RPC. Proponents of the RPC paradigm view every *request-response* communication as an implementation of the RPC paradigm, while those against RPC try to explicitly enumerate its limitations. These limitations, however, seem to slowly vanish as new RPC models are introduced. RPC supporters consider it the Holy Grail of distributed systems and view it as the foundation of modern distributed communication. From Apache Thrift and ONC to HTTP and REST, they advocate it all as RPC, while REST developers hold strong opinions against RPC.
+
+Moreover, with modern global storage mechanisms, the need for RPC systems to have separate *address spaces* seems to be slowly dissolving and disappearing into thin air. So the question remains: what *is* RPC and what *is not*? This is an open-ended question; there is no unanimous agreement on what RPC should look like, except that it involves communication between two *endpoints*. What we think of RPC is:
+
+*In the world of distributed systems, where every individual component of a system, be it a hard disk, a multi-core processor, or a microservice, is an extension of RPC, it's difficult to come up with a concrete definition of the RPC paradigm. Therefore, anything loosely associated with a request-response mechanism can be considered RPC.*
+
+<blockquote>
+<p align="center">
+<em>**RPC is not dead, long live RPC!**</em>
+</p>
+</blockquote>
## References
-{% bibliography --file rpc %} \ No newline at end of file
+{% bibliography --file rpc --cited %}
diff --git a/chapter/2/futures.md b/chapter/2/futures.md
index 5c56e92..0075773 100644
--- a/chapter/2/futures.md
+++ b/chapter/2/futures.md
@@ -1,11 +1,605 @@
---
layout: page
title: "Futures"
-by: "Joe Schmoe and Mary Jane"
+by: "Kisalaya Prasad and Avanti Patil"
---
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file futures %}
+# Introduction
-## References
+As human beings, we have the ability to multitask: we can walk, talk, and eat at the same time, except when we sneeze. A sneeze is like a blocking activity, because it forces you to stop what you're doing for a brief moment before you resume where you left off. Multitasking of this kind is what computing calls multithreading. Processor cores, in contrast, execute one thread at a time; a multi-threaded environment on a single processor is an illusion created by sharing the processor's time between multiple processes. The processor can also get held up when a task is hindered from normal execution by a blocking call. Such blocking calls range from I/O operations, like reads and writes to disk, to sending or receiving packets over the network, and they can take a disproportionate amount of time compared to the processor's ordinary work, like iterating over a list.
-{% bibliography --file futures %} \ No newline at end of file
+
+The processor can handle blocking calls in one of two ways:
+- **Synchronous method**: the processor waits for the blocking call to complete its task and return the result, and only then resumes processing the next task. The problem with this method is that CPU time is not utilized ideally.
+- **Asynchronous method**: with asynchrony, the processor's time can be spent on some other task using a preemptive time-sharing algorithm. The processor is never blocked; when the asynchronous call returns its result, the processor switches back to the earlier process via preemption and resumes it from the point where it left off (see the sketch below).
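+
+To make the contrast concrete, here is a minimal Scala sketch; `fetchUrl` is a hypothetical helper standing in for any slow, blocking operation:
+
+```scala
+import scala.concurrent.Future
+import scala.concurrent.ExecutionContext.Implicits.global
+import scala.io.Source
+
+// Hypothetical blocking call: reads a URL synchronously.
+def fetchUrl(url: String): String = Source.fromURL(url).mkString
+
+// Synchronous: the caller is blocked until the read completes.
+val page: String = fetchUrl("http://example.com")
+
+// Asynchronous: the read runs on another thread; the caller moves on
+// immediately and handles the result in a callback when it arrives.
+val pageF: Future[String] = Future { fetchUrl("http://example.com") }
+pageF.foreach(body => println(body.length))
+println("free to do other work while the fetch is in flight")
+```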
+
+In the world of asynchronous communication, many constructs have been defined to help programmers achieve an ideal level of resource utilization. In this article we discuss the motivation behind the rise of promises and futures, how our current notion of futures and promises has evolved over time, the various execution models associated with them, and, finally, how these constructs help us today in different general-purpose programming languages.
+
+
+<figure>
+ <img src="./images/1.png" alt="timeline" />
+</figure>
+
+# Motivation
+
+The rise of promises and futures as a topic of relevance parallels the rise of asynchronous and distributed systems. This seems natural, since a future represents a value available at some point in the future, which fits naturally with the latency inherent in such heterogeneous systems. The recent adoption of NodeJS and server-side JavaScript has only made promises more relevant. But the idea of a placeholder for a result appeared significantly earlier than the current notion of futures and promises. As we will see in further sections, this idea of a *"placeholder for a value that might not be available"* has changed meanings over time.
+
+Thunks can be thought of as a primitive notion of a future or promise. According to their inventor, P. Z. Ingerman, thunks are "a piece of coding which provides an address" {% cite 23 --file futures %}. They were designed as a way of binding actual parameters to their formal definitions in Algol-60 procedure calls. If a procedure is called with an expression in the place of a formal parameter, the compiler generates a thunk which computes the expression and leaves the address of the result in some standard location.
+
+
+The first mention of futures was by Baker and Hewitt, in a paper on the incremental garbage collection of processes. They coined the term call-by-future to describe a calling convention in which each formal parameter of a method is bound to a process which evaluates the corresponding expression in parallel with the other parameters. Before this paper, Algol 68 had also presented a way to make this kind of concurrent parameter evaluation possible, using collateral clauses and parallel clauses for parameter binding.
+
+
+In their paper, Baker and Hewitt introduced the notion of a future as a 3-tuple representing an expression E, consisting of (1) a process which evaluates E, (2) a memory location where the result of E is to be stored, and (3) a list of processes which are waiting on E. However, the major focus of their work was not the role futures play in asynchronous distributed computing; it was garbage collecting the processes that evaluate expressions whose values are never needed.
+
+
+The MultiLisp language, presented by Halstead in 1985, built upon call-by-future with a future annotation. Binding a variable to a future expression creates a process which evaluates that expression, and binds the variable to a token representing its (eventual) result. This allows the computation to move past the evaluation without waiting for it to complete; if the value is never used, the computation never pauses for it. MultiLisp also had a lazy future construct, called delay, which is only evaluated when its value is first required.
+
+This design of futures influenced the design of promises in Argus, presented by Liskov and Shrira in 1988. Both futures in MultiLisp and promises in Argus provide for the result of a call to be picked up later. Building upon the design of futures in MultiLisp, Liskov and Shrira extended the idea with strongly typed promises and integration with call streams. Call streams are a language-independent communication mechanism connecting a sender and a receiver in a distributed programming environment. A sender can make calls to the receiver over a stream like normal RPC, but it can also make stream calls, where it chooses not to wait for the reply and issues further calls. Stream calls are a natural use case for a placeholder for the result of a call: promises. Call streams also had provisions for handling network failures, which made it easier to propagate exceptions from callee to caller and to handle the typical problems of a multi-computer system. The paper also discussed stream composition: call streams could be arranged in pipelines where the output of one stream is used as the input of the next. This notion is not much different from what is known today as promise pipelining, which will be introduced in more detail later.
+
+
+E is an object-oriented programming language for secure distributed computing, created by Mark S. Miller, Dan Bornstein, and others at Electric Communities in 1997. One of the major contributions of E was the first non-blocking implementation of promises. It traces its roots to Joule, a dataflow programming language. E has an eventual-send operator, `<-`: the program doesn't wait for the operation to complete, and moves on to the next sequential statement. An eventual send queues a pending delivery and completes immediately, returning a promise. A pending delivery includes a resolver for the promise. Further messages can be eventually sent to a promise before it is resolved; these messages are queued up and forwarded once the promise resolves. The notion of promise pipelining in E is also inherited from Joule.
+
+
+Among modern languages, Python was perhaps the first to offer something along the lines of E's promises, with the Twisted library. Released in 2002, Twisted has the concept of Deferred objects, which are used to receive the result of an operation that has not yet completed. A Deferred is an ordinary object that can be passed along, but it doesn't have a value yet; it supports callbacks which get called once the result of the operation is available.
+
+
+Promises and JavaScript have an interesting history. In 2007, inspired by Python's Twisted, the Dojo toolkit came up with its own implementation, dojo.Deferred. This inspired Kris Zyp to propose the CommonJS Promises/A spec in 2009. Ryan Dahl introduced the world to NodeJS the same year. In its early versions, Node used promises in its non-blocking API. When NodeJS moved away from promises to its now-familiar error-first callback API (the first argument of a callback should be an error object), it left a void for a promises API. Q.js, by Kris Kowal, was an implementation of the Promises/A spec from around this time. The FuturesJS library by AJ ONeal was another library aiming to solve flow-control problems without using promises in the strictest sense. In 2011, jQuery v1.5 first introduced promises to its wide and ever-growing audience, with an API subtly different from the Promises/A spec. With the rise of HTML5 and its many APIs came a proliferation of different and messy interfaces, which added to the already infamous callback hell. The Promises/A+ spec aimed to solve this problem. Following the widespread adoption of the A+ spec, promises finally became part of the ECMAScript 2015 language specification. Still, a lack of backward compatibility and the additional features they provide mean that libraries like Bluebird and Q.js still have a place in the JavaScript ecosystem.
+
+
+# Different Definitions
+
+
+The terms future, promise, delay, and deferred generally refer to the same synchronization mechanism, where an object acts as a proxy for a yet-unknown result. When the result becomes available, code registered on the promise gets executed.
+
+In some languages, however, there is a subtle difference between a future and a promise: a *future* is a read-only reference to a yet-to-be-computed value, while a *promise* is pretty much the same thing, except that it can also be written to.
+
+
+In other words, a future is a read-only window onto a value written into a promise. You can get the future associated with a promise by calling the promise's future method, but conversion in the other direction is not possible. Another way to look at it: if you make a promise, you are responsible for keeping it, but if someone else makes a promise to you, you expect them to honor it in the future.
+
+More technically, Scala's "SIP-14 -- Futures and Promises" defines them as follows:
+A future is a placeholder object for a result that does not yet exist.
+A promise is a writable, single-assignment container which completes a future. A promise can complete its future with a result, to indicate success, or with an exception, to indicate failure.
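+
+To make the relationship concrete, here is a minimal Scala sketch of a promise completing its future:
+
+```scala
+import scala.concurrent.{Future, Promise}
+import scala.concurrent.ExecutionContext.Implicits.global
+import scala.util.Success
+
+val p: Promise[Int] = Promise[Int]()   // writable, single-assignment
+val f: Future[Int]  = p.future         // read-only view of p
+
+f.onComplete {
+  case Success(v) => println(s"got $v") // runs once p is completed
+  case _          => println("failed")
+}
+
+p.success(42) // completes the future; completing it twice would throw
+```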
+
+An important difference between Scala and Java (6) futures is that Scala futures are asynchronous in nature. Java's futures, at least until Java 6, were blocking. Later versions of Java added the asynchronous constructs, most notably Java 8's CompletableFuture, that are more familiar in the distributed computing world.
+
+
+In Java 8, the `Future<T>` interface has methods to check whether the computation is complete, to wait for its completion, and to retrieve the result once it is complete. A `CompletableFuture` can be thought of as a promise, since its value can be set; but it also implements the `Future` interface, so it can be used as a future too. A promise can be seen as a future with a public set method, which the caller (or anybody else) can use to set the value of the future.
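+
+A small sketch of this dual nature, written in Scala for consistency with the other examples here (assuming Scala 2.12+ for the lambda-as-Runnable conversion):
+
+```scala
+import java.util.concurrent.CompletableFuture
+
+// As a promise: anyone holding it can set its value...
+val cf = new CompletableFuture[Integer]()
+
+// ...completed here from another thread after a short delay.
+new Thread(() => {
+  Thread.sleep(100)
+  cf.complete(42) // fulfills it; later complete() calls return false
+}).start()
+
+// ...and as a future: consumers can wait on the result.
+println(cf.get()) // blocks until complete(42) fires, then prints 42
+```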
+
+
+In the JavaScript world, jQuery introduces the notion of Deferred objects, which represent a unit of work that is not yet finished. A Deferred object contains a promise object that represents the result of that unit of work. The promise is a value returned by a function, while the Deferred object can be resolved or rejected by its creator.
+
+
+C# also makes the distinction between futures and promises. In C#, futures are implemented as `Task<T>`; in fact, in earlier versions of the Task Parallel Library, futures were implemented as a class `Future<T>`, which later became `Task<T>`. The result of a future is available in the read-only property `Task<T>.Result`, which returns `T`. Tasks are asynchronous in C#.
+
+# Semantics of Execution
+
+Over the years, promises and futures have been implemented in many programming languages, each choosing a somewhat different approach. In this section, we introduce some of the ways futures and promises actually get executed and resolved underneath their APIs.
+
+## Thread Pools
+
+Thread pools are a group of ready, idle threads to which work can be given. They amortize the overhead of worker creation, which can add up in a long-running process. Actual implementations vary, but thread pools are chiefly differentiated by the number of threads they use, which can be either fixed or dynamic. The advantage of a fixed thread pool is that it degrades gracefully: the amount of load a system can handle is finite, and a fixed pool effectively limits the load the system is put under. The granularity of a thread pool is the number of threads it instantiates.
+
+
+In Java, an Executor is an object that executes Runnable tasks. Executors provide a way of abstracting out the details of how a task will actually run: which thread runs the task, and how it is scheduled, are managed by the object implementing the Executor interface. A Thread is an example of a Runnable in Java. Executors can be used instead of creating threads explicitly.
+
+
+Similar to Executor, Scala has an ExecutionContext as part of scala.concurrent. The basic intent is the same: an ExecutionContext is responsible for executing computations, and how it does so is opaque to the caller. It can create a new thread, use a pool of threads, or run the computation on the caller's own thread (although the last option is generally not recommended). The scala.concurrent package comes with a default implementation of ExecutionContext: a global static thread pool.
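+
+As an illustration, one way (not the only one) to supply a custom context is to wrap a standard Java thread pool; `expensiveComputation` is a hypothetical stand-in for real work:
+
+```scala
+import java.util.concurrent.Executors
+import scala.concurrent.{ExecutionContext, Future}
+
+// A fixed pool of 4 threads exposed as an ExecutionContext.
+implicit val ec: ExecutionContext =
+  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))
+
+def expensiveComputation(): Int = (1 to 1000000).sum // hypothetical workload
+
+// This future runs on the fixed pool instead of the global ForkJoinPool.
+val f = Future { expensiveComputation() }
+```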
+
+
+ExecutionContext.global is an execution context backed by a ForkJoinPool, a thread pool implementation designed to take advantage of multiprocessor environments. What makes fork/join unique is its work-stealing algorithm: idle threads pick up work from threads that are still busy. A ForkJoinPool manages a small number of threads, usually limited to the number of available processor cores. The number of threads can grow if all the available threads are busy inside blocking calls, although such a situation is highly undesirable for most systems. The ForkJoin framework works to avoid pool-induced deadlock and to minimize the time spent switching between threads.
+
+
+In Scala, futures are generally a good framework for reasoning about concurrency: they can be executed in parallel, waited on, and composed; they are immutable once written; and, most importantly, they are non-blocking (although blocking futures, as in Java 6, are possible). Scala's futures (and promises) are based on ExecutionContext. This gives users the flexibility to implement their own ExecutionContext when they need specific behavior, like blocking futures. The default ForkJoin pool works well in most scenarios.
+
+The Scala futures API expects an ExecutionContext to be passed along. This parameter is implicit, and is usually ExecutionContext.global. An example:
+
+
+```scala
+implicit val ec = ExecutionContext.global
+val f : Future[String] = Future { "hello world" }
+```
+
+In this example, the global execution context is used to run the created future asynchronously. Taking another example:
+
+
+```scala
+implicit val ec = ExecutionContext.global
+
+val f = Future {
+ Http("http://api.fixed.io/latest?base=USD").asString
+}
+
+f.onComplete {
+  case Success(response) => println(response.body)
+ case Failure(t) => println(t)
+}
+```
+
+
+It is generally a good idea to use callbacks with Futures, as the value may not be available when you want to use it.
+
+So, how does it all work together?
+
+As we mentioned, futures require an ExecutionContext, which is an implicit parameter to virtually all of the futures API. This ExecutionContext is used to execute the future. Scala is flexible enough to let users implement their own execution contexts, but let's talk about the default one, which is a ForkJoinPool.
+
+
+ForkJoinPool is ideal for many small computations that spawn off and then come back together. Scala's ForkJoinPool requires the tasks submitted to it to be ForkJoinTasks; tasks submitted to the global ExecutionContext are quietly wrapped inside a ForkJoinTask and then executed. ForkJoinPool can also handle potentially blocking tasks via its managedBlock method, which creates a spare thread if required to ensure sufficient parallelism while the current thread is blocked. To summarize, ForkJoinPool is a really good general-purpose ExecutionContext which works well in most scenarios.
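+
+Scala surfaces this through the `scala.concurrent.blocking` construct, which (on the default global pool) signals that a spare worker thread may be needed. A brief sketch:
+
+```scala
+import scala.concurrent.{Future, blocking}
+import scala.concurrent.ExecutionContext.Implicits.global
+
+val f = Future {
+  blocking {
+    // Hint that this section may block, so the pool can add a
+    // spare worker and preserve parallelism for other tasks.
+    Thread.sleep(1000)
+  }
+  "done"
+}
+```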
+
+
+## Event Loops
+
+Modern systems typically rely on many other systems to provide their functionality. There's a file system underneath, a database system, and other web services to rely on for information. Interaction with these components typically involves periods where we do nothing but wait for the response. This is the single largest waste of computing resources.
+
+
+JavaScript is a single-threaded asynchronous runtime. Conventionally, async programming is associated with multi-threading, but we're not allowed to create new threads in JavaScript. Instead, asynchronicity in JavaScript is achieved using an event-loop mechanism.
+
+
+Javascript has historically been used to interact with the DOM and user interactions in the browser, and thus an event-driven programming model was a natural fit for the language. This has scaled up surprisingly well in high throughput scenarios in NodeJS.
+
+
+The general idea behind event-driven programming is that the flow of control is determined by the order in which events are processed. Underpinning this is a mechanism which constantly listens for events and fires a callback when one is detected. This is JavaScript's event loop in a nutshell.
+
+
+A typical JavaScript engine has a few basic components:
+- **Heap**: used to allocate memory for objects.
+- **Stack**: function call frames go onto a stack, from whose top they are picked up for execution.
+- **Queue**: a message queue holds the messages to be processed.
+
+
+Each message has a callback function which is fired when the message is processed. Messages can be generated by user actions, like button clicks or scrolling, or by actions like HTTP requests, requests to a database to fetch records, or reading/writing a file.
+
+
+Separating when a message is queued from when it is executed means the single thread doesn't have to wait for an action to complete before moving on to another. We attach a callback to the action we want done, and when the time comes, the callback runs with the result of the action. Callbacks work well in isolation, but they force us into a continuation-passing style of execution, otherwise known as callback hell.
+
+
+```javascript
+
+getData = function(param, callback){
+ $.get('http://example.com/get/'+param,
+ function(responseText){
+ callback(responseText);
+ });
+}
+
+getData(0, function(a){
+ getData(a, function(b){
+ getData(b, function(c){
+ getData(c, function(d){
+ getData(d, function(e){
+
+ });
+ });
+ });
+ });
+});
+
+```
+
+<center><h4> VS </h4></center>
+
+```javascript
+
+getData = function(param, callback){
+ return new Promise(function(resolve, reject) {
+ $.get('http://example.com/get/'+param,
+ function(responseText){
+ resolve(responseText);
+ });
+ });
+}
+
+getData(0).then(getData)
+          .then(getData)
+          .then(getData)
+          .then(getData);
+
+
+```
+
+> **Programs must be written for people to read, and only incidentally for machines to execute.** - *Harold Abelson and Gerald Jay Sussman*
+
+
+Promises are an abstraction which makes working with async operations in JavaScript much more fun. Callbacks lead to inversion of control, which is difficult to reason about at scale. Moving on from a continuation-passing style, where you specify what needs to be done once the action completes, the callee simply returns a Promise object. This inverts the chain of responsibility: the caller is now responsible for handling the result of the promise when it settles.
+
+The ES2015 spec specifies that "promises must not fire their resolution/rejection function on the same turn of the event loop that they are created on." This is an important property, because it ensures a deterministic order of execution. Also, once a promise is fulfilled or rejected, its value must not change. This ensures a promise cannot be resolved more than once.
+
+Let's take an example to understand the promise resolution workflow as it happens inside the JavaScript engine.
+
+Suppose we execute a function g(), which in turn calls a function f(). Function f returns a promise which, after counting down for 1000 ms, resolves with the single value true. Once f's promise is resolved, true or false is alerted based on the value of the promise.
+
+
+<figure>
+ <img src="./images/5.png" alt="timeline" />
+</figure>
+
+Now, JavaScript's runtime is single threaded. This statement is both true and not true. The thread which executes the user code is single: it executes whatever is on top of the stack, runs it to completion, and then moves on to whatever is next on the stack. But there are also a number of helper threads which handle things like network or timer/setTimeout events. That timing thread handles the counter for setTimeout.
+
+<figure>
+ <img src="./images/6.png" alt="timeline" />
+</figure>
+
+Once the timer expires, the timer thread puts a message on the message queue. The queued-up messages are then handled by the event loop which, as described above, is simply an infinite loop that checks whether a message is ready to be processed, picks it up, and puts its callback on the stack to be executed.
+
+<figure>
+ <img src="./images/7.png" alt="timeline" />
+</figure>
+
+Here, since the future is resolved with a value of true, we are alerted with a value true when the callback is picked up for execution.
+
+<figure>
+ <img src="./images/8.png" alt="timeline" />
+</figure>
+
+Some finer details:
+- We've ignored the heap here, but all functions, variables, and callbacks are stored on the heap.
+- As we've seen, even though JavaScript is said to be single threaded, there are a number of helper threads to help the main thread with things like timeouts, UI, network operations, and file operations.
+- Run-to-completion helps us reason about the code in a nice way: whenever a function starts, it needs to finish before yielding the main thread, and the data it accesses cannot be modified by someone else. This also means every function needs to finish in a reasonable amount of time, otherwise the program appears hung. It makes JavaScript well suited for I/O tasks, which are queued up and picked up when finished, but not for data-processing-intensive tasks which take a long time to finish.
+- We haven't talked about error handling, but it is handled in exactly the same way, with the error callback being called with the error object the promise was rejected with.
+
+
+Event loops have proven to be surprisingly performant. When network servers are designed around multithreading, the CPU ends up spending so much of its time task-switching once you reach a few hundred concurrent connections that overall performance starts to degrade. Switching from one thread to another has overhead which adds up significantly at scale. While Apache used to choke at as low as a few hundred concurrent users with a thread per connection, Node can scale up to 100,000 concurrent connections based on an event loop and asynchronous IO.
+
+
+## Thread Model
+
+
+The Oz programming language introduced the idea of a dataflow concurrency model. In Oz, whenever the program comes across an unbound variable, it waits for the variable to be resolved. This dataflow property of variables lets us write threads in Oz that communicate through streams in a producer-consumer pattern. The major benefit of the dataflow concurrency model is that it's deterministic: the same operation called with the same parameters always produces the same result. This makes it a lot easier to reason about concurrent programs, provided the code is side-effect free.
+
+
+Alice ML is a dialect of Standard ML with support for lazy evaluation and concurrent, distributed, and constraint programming. The early aim of the Alice project was to reconstruct the functionality of the Oz programming language on top of a typed language. Building on its Standard ML base, Alice provides concurrency features as part of the language through the use of a future type. A future in Alice represents the as-yet-undetermined result of a concurrent operation; promises in Alice ML are explicit handles for futures.
+
+
+Any expression in Alice can be evaluated in its own thread using the spawn keyword. Spawn always returns a future, which acts as a placeholder for the result of the operation. Futures in Alice ML can be thought of as functional threads, in the sense that threads in Alice always have a result. A thread is said to be touching a future if it performs an operation that requires the value the future stands for. All threads touching a future block until the future is resolved. If the thread raises an exception, the future fails, and the exception is re-raised in every thread touching it. Futures can also be passed along as values; this is how Alice achieves the dataflow model of concurrency.
+
+
+Alice also allows lazy evaluation of expressions. Expressions preceded by the lazy keyword are evaluated to a lazy future, which is computed when its value is first needed. If the computation associated with a concurrent or lazy future ends with an exception, the result is a failed future. Requesting a failed future does not block; it simply re-raises the exception that caused the failure.
+
+# Implicit vs. Explicit Promises
+
+
+We define implicit promises as ones where we don't have to trigger the computation manually, versus explicit promises, where we have to trigger the resolution of the future manually, either by calling a start function or by requiring the value. This distinction can be understood in terms of what triggers the calculation: with implicit promises, the creation of a promise also triggers the computation, while with explicit promises, one needs to trigger the resolution separately. That trigger can in turn be explicit, like calling a start method, or implicit, like lazy evaluation, where the first use of a promise's value triggers its evaluation.
+
+
+The idea of explicit futures was introduced in the Baker and Hewitt paper. They're a little trickier to implement and require some support from the underlying language, so they aren't that common. Baker and Hewitt talked about using futures as placeholders for arguments to a function, which get evaluated in parallel, but only when they're needed. MultiLisp also had a mechanism to delay the evaluation of a future until the first time its value is used, via the defer construct. Lazy futures in Alice ML have a similar explicit invocation mechanism: the first thread touching a future triggers its evaluation.
+
+An example of an explicit (lazy) future, from Alice ML:
+
+```
+fun enum n = lazy n :: enum (n+1)
+
+```
+
+This example generates an infinite stream of integers. Because the stream is lazy, each element is only computed when it is first requested; if it were started eagerly at creation, it would compete for system resources indefinitely.
+
+Implicit futures were originally introduced by Friedman and Wise in a paper in 1978. The ideas presented in that paper inspired the design of futures in MultiLisp. Futures are also implicit in Scala and JavaScript, where they're supported as libraries on top of the core language; implicit futures can be implemented this way because they don't require support from the language itself. Alice ML's concurrent futures are another example of implicit invocation.
+
+For example:
+
+```scala
+
+val f = Future {
+ Http("http://api.fixer.io/latest?base=USD").asString
+}
+
+f onComplete {
+ case Success(response) => println(response.body)
+ case Failure(t) => println(t)
+}
+
+```
+
+This sends the HTTP call as soon as the future is created. In Scala, although futures are implicit, promises can be used to obtain explicit-like behavior. This is useful in scenarios where we need to stack up some computations and then resolve the promise.
+
+An example:
+
+```scala
+
+val p = Promise[Foo]()
+
+p.future.map( ... ).filter( ... ) foreach println
+
+p.success(new Foo)
+
+```
+
+Here, we create a promise and complete it later. In between, we stack up a set of computations which get executed once the promise is completed.
+
+
+# Promise Pipelining
+
+One of the criticisms of traditional RPC systems is that they're blocking. Imagine a scenario where you need to call an API `a` and another API `b`, then aggregate the results of both calls and use the aggregate as a parameter to a third API `c`. The logical way to go about this is to call `a` and `b` in parallel, then, once both finish, aggregate the results and call `c`. Unfortunately, in a blocking system, the only option is to call `a`, wait for it to finish, call `b`, wait, then aggregate and call `c`. This seems like a waste of time, but in the absence of asynchronicity it is unavoidable. Even with asynchronicity, managing the calls and scaling the system linearly gets difficult. Fortunately, we have promises.
+
+
+<figure>
+ <img src="./images/p-1.png" alt="timeline" />
+</figure>
+
+<figure>
+ <img src="./images/p-2.png" alt="timeline" />
+</figure>
+
+Futures and promises can be passed along, waited upon, or chained and joined together. These properties make life easier for the programmers working with them, and reduce the latency associated with distributed computing. Promises enable dataflow concurrency, which is deterministic and easier to reason about.
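+
+The earlier a/b/c scenario can be expressed directly with future combinators. A sketch in Scala, where `callA`, `callB`, `callC`, and `aggregate` are hypothetical stand-ins for the remote calls:
+
+```scala
+import scala.concurrent.Future
+import scala.concurrent.ExecutionContext.Implicits.global
+
+def callA(): Future[Int] = Future { 1 }            // hypothetical API a
+def callB(): Future[Int] = Future { 2 }            // hypothetical API b
+def aggregate(a: Int, b: Int): Int = a + b
+def callC(x: Int): Future[Int] = Future { x * 10 } // hypothetical API c
+
+// a and b are started immediately and run in parallel; c fires as
+// soon as both results are available and aggregated.
+val fa = callA()
+val fb = callB()
+val result: Future[Int] = for {
+  a <- fa
+  b <- fb
+  c <- callC(aggregate(a, b))
+} yield c
+```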
+
+The history of promise pipelining can be traced back to call streams in Argus. In Argus, call streams are a mechanism for communication between distributed components. The communicating entities, a sender and a receiver, are connected by a stream, and the sender can make calls to the receiver over it. Streams can be thought of as RPC, except that they allow the caller to run in parallel with the receiver while the call is processed. When making a call in Argus, the caller receives a promise for the result. In their paper on promises, Liskov and Shrira mention that, having integrated promises into call streams, the next logical step is stream composition: arranging streams into pipelines where the output of one stream can be used as the input of the next. They talk about composing streams using fork and coenter.
+
+Channels in Joule were a similar idea, providing a channel which connects an acceptor and a distributor. Joule was a direct ancestor of the E language, which discusses these ideas in more detail.
+
+```
+
+t3 := (x <- a()) <- c(y <- b())
+
+t1 := x <- a()
+t2 := y <- b()
+t3 := t1 <- c(t2)
+
+```
+
+Without pipelining, this call in E would require three round trips: first sending a() to x, then b() to y, and finally c to the result t1, with t2 as an argument. With pipelining, the later messages can be sent with the promises resulting from the earlier messages as arguments, so all the messages can be dispatched together, saving the costly round trips. This assumes x and y are on the same remote machine; otherwise, we can still evaluate t1 and t2 in parallel.
+
+
+Notice that this pipelining mechanism is different from asynchronous message passing: with asynchronous message passing, even if t1 and t2 are evaluated in parallel, resolving t3 still requires waiting for t1 and t2 to be resolved and then sending another call to the remote machine.
+
+
+Modern promise specifications, like JavaScript's, come with methods that make working with promise pipelining easier. JavaScript provides a Promise.all method, which takes an iterable of promises and returns a new promise that resolves when all the promises in the iterable have resolved. There's also a race method, which returns a promise that settles as soon as the first promise in the iterable settles.
+
+```javascript
+
+var p1 = Promise.resolve(1);
+var p2 = new Promise(function (resolve, reject) {
+ setTimeout(resolve, 100, 2);
+});
+
+Promise.all([p1, p2]).then(values => {
+ console.log(values); // [1,2]
+});
+
+Promise.race([p1, p2]).then(function(value) {
+ console.log(value); // 1
+});
+
+```
+
+In Scala, futures have an onSuccess method which acts as a callback for when the future completes. This callback can itself be used to chain futures together sequentially, but that results in bulkier code. Fortunately, the Scala API comes with combinators which allow the results of futures to be combined more easily. Examples of combinators are map, flatMap, filter, and withFilter.
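+
+A brief sketch of combinator-based chaining, with a hypothetical `fetchQuote` standing in for a remote call:
+
+```scala
+import scala.concurrent.Future
+import scala.concurrent.ExecutionContext.Implicits.global
+
+def fetchQuote(symbol: String): Future[Double] = Future { 100.0 } // hypothetical
+
+val purchase: Future[String] =
+  fetchQuote("USD")
+    .filter(_ > 50)                        // fails the future if the predicate fails
+    .map(_ * 0.9)                          // transform the successful value
+    .flatMap(q => Future(s"bought at $q")) // chain another asynchronous step
+
+purchase.foreach(println)
+```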
+
+
+# Handling Errors
+
+If the world ran without errors, we would rejoice in unison, but that is not the case in the programming world. When you run a program, you either receive the expected output or an error, where an error can be a wrong output or an exception. In a synchronous programming model, the most logical way of handling errors is a try...catch block.
+
+```javascript
+
+try {
+  doSomething1();
+  doSomething2();
+  doSomething3();
+  // ...
+} catch (exception) {
+  handleException(exception);
+}
+
+```
+
+Unfortunately, the same thing doesn't translate directly to asynchronous code.
+
+
+```javascript
+
+var foo = doSomethingAsync();
+
+try {
+  foo();
+  // This doesn't work, as the error might not have been thrown yet
+} catch (exception) {
+  handleException(exception);
+}
+
+
+```
+
+
+
+Although most of the earlier papers did not talk about error handling, the Promises paper by Liskov and Shrira did acknowledge the possibility of failure in a distributed environment. In Argus's terms, the 'claim' operation waits until the promise is ready; it then returns normally if the call terminated normally, and otherwise signals the appropriate 'exception', e.g.,
+
+```
+y: real := pt$claim(x)
+ except when foo: ...
+ when unavailable(s: string): .
+ when failure(s: string): . .
+ end
+
+```
+Here x is a promise object of type pt; the form pt$claim illustrates the way Argus identifies an operation of a type, by concatenating the type name with the operation name. When there are communication problems, RPCs in Argus terminate with either the 'unavailable' or the 'failure' exception:
+'Unavailable' means that the problem is temporary, e.g., communication is impossible right now.
+'Failure' means that the problem is permanent, e.g., the handler's guardian does not exist.
+Thus stream calls (and sends) whose replies are lost because of broken streams terminate with one of these exceptions. Both exceptions carry a string argument that explains the reason for the failure, e.g., failure("handler does not exist") or unavailable("cannot communicate"). Since any call can fail, every handler can raise the failure and unavailable exceptions. The paper also discusses the propagation of exceptions from the called procedure to the caller. The paper on the E language talks about broken promises and setting a promise to the exception of broken references.
+
+In modern languages like Scala, promises generally come with two callbacks: one to handle the success case and the other to handle failure, e.g.:
+
+```scala
+
+f onComplete {
+ case Success(data) => handleSuccess(data)
+ case Failure(e) => handleFailure(e)
+}
+```
+
+In Scala, the Try type represents a computation that may either result in an exception or return a successfully computed value. For example, Try[Int] represents a computation which can either yield an Int if it's successful, or a Throwable if something goes wrong.
+
+```scala
+
+val a: Int = 100
+val b: Int = 0
+def divide: Try[Int] = Try(a/b)
+
+divide match {
+ case Success(v) =>
+ println(v)
+ case Failure(e) =>
+ println(e) // java.lang.ArithmeticException: / by zero
+}
+
+```
+
+Try values can be pipelined, allowing exceptions to be caught and recovered from along the way.
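+
+For instance, `recover` lets a pipeline substitute a fallback value when an earlier step fails; a small sketch:
+
+```scala
+import scala.util.Try
+
+val result: Try[Int] =
+  Try("42a".toInt)  // Failure(NumberFormatException)
+    .map(_ * 2)     // skipped; the failure propagates
+    .recover { case _: NumberFormatException => 0 } // Success(0)
+
+println(result.getOrElse(-1)) // prints 0
+```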
+
+#### In JavaScript
+```javascript
+
+promise.then(function (data) {
+ // success callback
+ console.log(data);
+}, function (error) {
+ // failure callback
+ console.error(error);
+});
+
+```
+Scala futures exception handling:
+
+When an asynchronous computation throws an unhandled exception, the future associated with that computation fails, storing an instance of Throwable instead of the result value. Futures provide the onFailure callback method, which accepts a PartialFunction to be applied to the Throwable. TimeoutException, scala.runtime.NonLocalReturnControl[_], and ExecutionException are treated differently.
+
+Scala promises exception handling:
+
+When failing a promise with an exception, three subtypes of Throwable are handled specially. If the Throwable used to break the promise is a scala.runtime.NonLocalReturnControl, then the promise is completed with the corresponding value. If the Throwable is an instance of Error, InterruptedException, or scala.util.control.ControlThrowable, it is wrapped as the cause of a new ExecutionException, which, in turn, fails the promise.
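+
+A minimal sketch of failing a promise explicitly and observing the failure through the associated future:
+
+```scala
+import scala.concurrent.Promise
+import scala.concurrent.ExecutionContext.Implicits.global
+import scala.util.{Failure, Success}
+
+val p = Promise[Int]()
+p.future.onComplete {
+  case Success(v) => println(v)
+  case Failure(e) => println(s"failed: ${e.getMessage}")
+}
+p.failure(new IllegalStateException("upstream service down"))
+```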
+
+
+To handle errors with asynchronous methods and callbacks, the error-first callback style (which we've seen before, and which Node adopted) is the most common convention. Although this works, it is not very composable and eventually takes us back to what is called callback hell. Fortunately, promises allow asynchronous code to apply structured error handling. A promise's then method takes two callbacks: an onFulfilled, to handle when the promise resolves successfully, and an onRejected, to handle when it is rejected.
+
+```javascript
+
+var p = new Promise(function(resolve, reject){
+ resolve(100);
+});
+
+p.then(function(data){
+ console.log(data); // 100
+},function(error){
+  console.error(error);
+});
+
+var q = new Promise(function(resolve, reject){
+ reject(new Error(
+ {'message':'Divide by zero'}
+ ));
+});
+
+q.then(function(data){
+ console.log(data);
+},function(error){
+  console.error(error); // {'message':'Divide by zero'}
+});
+
+```
+
+
+Promises also have a catch method, which works the same way as the onRejected callback but also helps deal with errors in a composition. Exceptions in promises behave the same way as in a synchronous block of code: they jump to the nearest exception handler.
+
+
+```javascript
+function work(data) {
+ return Promise.resolve(data+"1");
+}
+
+function error(data) {
+ return Promise.reject(data+"2");
+}
+
+function handleError(error) {
+ return error +"3";
+}
+
+
+work("")
+.then(work)
+.then(error)
+.then(work) // this will be skipped
+.then(work, handleError)
+.then(check);
+
+function check(data) {
+ console.log(data == "1123");
+ return Promise.resolve();
+}
+
+```
+
+The same behavior can be written using a catch block.
+
+```javascript
+
+work("")
+.then(work)
+.then(error)
+.then(work)
+.catch(handleError)
+.then(check);
+
+function check(data) {
+ console.log(data == "1123");
+ return Promise.resolve();
+}
+
+```
+
+
+# Futures and Promises in Action
+
+
+## Twitter Finagle
+
+
+Finagle is a protocol-agnostic, asynchronous RPC system for the JVM that makes it easy to build robust clients and servers in Java, Scala, or any JVM-hosted language. It uses futures to encapsulate concurrent tasks, and introduces two other abstractions built on top of futures to reason about distributed software:
+
+- **Services** are asynchronous functions which represent system boundaries.
+
+- **Filters** are application-independent blocks of logic, such as timeout handling and authentication.
+
+In Finagle, operations describe what needs to be done, while the actual execution is left to be handled by the runtime. The runtime comes with a robust implementation of connection pooling, failure detection and recovery and load balancers.
+
+Example of a Service:
+
+
+```scala
+
+val service = new Service[HttpRequest, HttpResponse] {
+  // a Service[Req, Rep] is essentially a function Req => Future[Rep]
+  def apply(request: HttpRequest) =
+    Future(new DefaultHttpResponse(HTTP_1_1, OK))
+}
+
+```
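+
+Calling the service yields a Future of the response; a minimal sketch of consuming it (assuming some request value is in scope), using Twitter's onSuccess combinator:
+
+```scala
+service(request).onSuccess { response =>
+  println("received: " + response)
+}
+```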
+A timeout filter can be implemented as:
+
+```scala
+
+def timeoutFilter(d: Duration) =
+ { (req, service) => service(req).within(d) }
+
+```
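+
+This lambda-style filter mirrors Finagle's Filter abstraction. A hedged sketch of the same idea using Finagle's SimpleFilter type (assuming a Twitter util Timer is available):
+
+```scala
+import com.twitter.finagle.{Service, SimpleFilter}
+import com.twitter.util.{Duration, Future, Timer}
+
+// a SimpleFilter keeps the same request/response types on both sides
+class TimeoutFilter[Req, Rep](timeout: Duration, timer: Timer)
+  extends SimpleFilter[Req, Rep] {
+  def apply(request: Req, service: Service[Req, Rep]): Future[Rep] =
+    service(request).within(timer, timeout) // fail the response if it misses the deadline
+}
+
+// filters compose with services via andThen:
+// val guarded = new TimeoutFilter[HttpRequest, HttpResponse](timeout, timer) andThen service
+```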
+
+
+## Correctables
+Correctables were introduced by Rachid Guerraoui, Matej Pavlovic, and Dragos-Adrian Seredinschi at OSDI ’16, in a paper titled Incremental Consistency Guarantees for Replicated Objects. As the title suggests, Correctables aim to solve the problems with consistency in replicated objects. They provide incremental consistency guarantees by capturing successive changes to the value of a replicated object. Applications can opt to receive a fast but possibly inconsistent result if eventual consistency is acceptable, or to wait for a strongly consistent result. The Correctables API draws inspiration from, and builds on, the API of Promises. Promises have a two-state model for representing an asynchronous task: it starts in a blocked state and proceeds to a ready state when the value is available. This cannot represent the incremental nature of Correctables. Instead, a Correctable starts in an updating state, where it remains during intermediate updates; when the final result is available, it transitions to a final state. If an error occurs in between, it moves into an error state. Each state change triggers a callback.
+
+<figure>
+ <img src="./images/15.png" alt="timeline" />
+</figure>
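+
+The concrete API is defined in the paper; purely to illustrate the state model described above, a hypothetical Scala encoding (all names here are invented for exposition) might look like:
+
+```scala
+// hypothetical sketch: one callback per state transition of a Correctable
+trait Correctable[A] {
+  def onUpdate(callback: A => Unit): Correctable[A]         // fired for each preliminary, weakly consistent value
+  def onFinal(callback: A => Unit): Correctable[A]          // fired once, when the strongly consistent value arrives
+  def onError(callback: Throwable => Unit): Correctable[A]  // fired if the invocation fails
+}
+```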
+
+
+## Folly Futures
+Folly is a C++ library from Facebook whose Futures component provides asynchronous programming support inspired by Twitter's Futures for Scala. It builds upon the futures in the C++11 standard. Like Scala's futures, Folly's futures allow for plugging in a custom executor, which provides different ways of running a Future (a thread pool, an event loop, etc.).
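+
+The custom-executor idea carries over to Scala directly; a minimal sketch (expensiveComputation is a stand-in for any user code):
+
+```scala
+import java.util.concurrent.Executors
+import scala.concurrent.{ExecutionContext, Future}
+
+// schedule futures on a fixed-size thread pool instead of the default pool
+implicit val ec: ExecutionContext =
+  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4))
+
+def expensiveComputation(): Int = (1 to 1000000).sum
+
+Future { expensiveComputation() } // runs on the custom executor
+```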
+
+
+## NodeJS Fiber
+Fibers provide coroutine support for V8 and Node.js. Applications can use fibers to write code without a ton of callbacks, without sacrificing the performance benefits of asynchronous I/O. Think of fibers as lightweight threads for Node.js where scheduling is in the hands of the programmer. The node-fibers library recommends against mixing its raw API directly with application code, and instead provides a Futures implementation that is ‘fiber-aware’.
+
+
+# References
+
+{% bibliography --file futures %}
diff --git a/chapter/2/images/1.png b/chapter/2/images/1.png
new file mode 100644
index 0000000..569c326
--- /dev/null
+++ b/chapter/2/images/1.png
Binary files differ
diff --git a/chapter/2/images/15.png b/chapter/2/images/15.png
new file mode 100644
index 0000000..15a2a81
--- /dev/null
+++ b/chapter/2/images/15.png
Binary files differ
diff --git a/chapter/2/images/5.png b/chapter/2/images/5.png
new file mode 100644
index 0000000..b86de04
--- /dev/null
+++ b/chapter/2/images/5.png
Binary files differ
diff --git a/chapter/2/images/6.png b/chapter/2/images/6.png
new file mode 100644
index 0000000..aaafdbd
--- /dev/null
+++ b/chapter/2/images/6.png
Binary files differ
diff --git a/chapter/2/images/7.png b/chapter/2/images/7.png
new file mode 100644
index 0000000..7183fb6
--- /dev/null
+++ b/chapter/2/images/7.png
Binary files differ
diff --git a/chapter/2/images/8.png b/chapter/2/images/8.png
new file mode 100644
index 0000000..d6d2e0e
--- /dev/null
+++ b/chapter/2/images/8.png
Binary files differ
diff --git a/chapter/2/images/9.png b/chapter/2/images/9.png
new file mode 100644
index 0000000..1b67a45
--- /dev/null
+++ b/chapter/2/images/9.png
Binary files differ
diff --git a/chapter/2/images/p-1.png b/chapter/2/images/p-1.png
new file mode 100644
index 0000000..7061fe3
--- /dev/null
+++ b/chapter/2/images/p-1.png
Binary files differ
diff --git a/chapter/2/images/p-1.svg b/chapter/2/images/p-1.svg
new file mode 100644
index 0000000..87e180b
--- /dev/null
+++ b/chapter/2/images/p-1.svg
@@ -0,0 +1,4 @@
+<?xml version="1.0" standalone="yes"?>
+
+<svg version="1.1" viewBox="0.0 0.0 720.0 540.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l720.0 0l0 540.0l-720.0 0l0 -540.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l720.0 0l0 540.0l-720.0 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m273.45404 246.13159l163.37006 -17.354324" fill-rule="nonzero"></path><path stroke="#93c47d" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m276.86194 245.76958l159.96216 -16.99231" fill-rule="evenodd"></path><path fill="#93c47d" stroke="#93c47d" stroke-width="1.0" stroke-linecap="butt" d="m276.86194 245.76959l0.9995117 -1.2370911l-2.9537048 1.4446716l3.1912842 0.7919159z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m40.0 88.13911l613.98425 0l0 44.000008l-613.98425 0z" fill-rule="nonzero"></path><path fill="#000000" d="m151.20563 109.902855q0 0.34375 -0.015625 0.578125q-0.015625 0.234375 -0.03125 0.4375l-6.53125 0q0 1.4375 0.796875 2.203125q0.796875 0.765625 2.296875 0.765625q0.40625 0 0.8125 -0.03125q0.40625 -0.046875 0.78125 -0.09375q0.390625 -0.0625 0.734375 -0.125q0.34375 -0.078125 0.640625 -0.15625l0 1.328125q-0.65625 0.1875 -1.484375 0.296875q-0.828125 0.125 -1.71875 0.125q-1.203125 0 -2.0625 -0.328125q-0.859375 -0.328125 -1.421875 -0.9375q-0.546875 -0.625 -0.8125 -1.515625q-0.265625 -0.890625 -0.265625 -2.03125q0 -0.984375 0.28125 -1.859375q0.296875 -0.875 0.828125 -1.53125q0.546875 -0.671875 1.328125 -1.0625q0.796875 -0.390625 1.796875 -0.390625q0.984375 0 1.734375 0.3125q0.75 0.296875 1.265625 0.859375q0.515625 0.5625 0.78125 1.375q0.265625 0.796875 0.265625 1.78125zm-1.6875 -0.21875q0.03125 -0.625 -0.125 -1.140625q-0.140625 -0.515625 -0.453125 -0.890625q-0.3125 -0.375 -0.78125 -0.578125q-0.453125 -0.203125 -1.078125 -0.203125q-0.515625 0 -0.953125 0.203125q-0.4375 0.203125 -0.765625 0.578125q-0.3125 0.359375 -0.5 0.890625q-0.1875 0.515625 -0.234375 1.140625l4.890625 0zm12.460419 5.375l-2.140625 0l-2.515625 -3.546875l-2.484375 3.546875l-2.078125 0l3.609375 -4.671875l-3.453125 -4.640625l2.078125 0l2.4375 3.578125l2.40625 -3.578125l2.0 0l-3.5 4.671875l3.640625 4.640625zm9.819794 -4.828125q0 1.25 -0.34375 2.1875q-0.34375 0.921875 -0.953125 1.53125q-0.609375 0.609375 -1.453125 0.921875q-0.828125 0.296875 -1.8125 0.296875q-0.4375 0 -0.890625 -0.046875q-0.4375 -0.046875 -0.890625 -0.15625l0 3.890625l-1.609375 0l0 -13.109375l1.4375 0l0.109375 1.5625q0.6875 -0.96875 1.46875 -1.34375q0.796875 -0.390625 1.71875 -0.390625q0.796875 0 1.390625 0.34375q0.609375 0.328125 1.015625 0.9375q0.40625 0.609375 0.609375 1.46875q0.203125 0.859375 0.203125 1.90625zm-1.640625 0.078125q0 -0.734375 -0.109375 -1.34375q-0.109375 -0.609375 -0.34375 -1.046875q-0.234375 -0.4375 -0.59375 -0.6875q-0.359375 -0.25 -0.859375 -0.25q-0.3125 0 -0.625 0.109375q-0.3125 0.09375 -0.65625 0.328125q-0.328125 0.21875 -0.703125 0.59375q-0.375 0.375 -0.8125 0.9375l0 4.515625q0.453125 0.1875 0.9375 0.296875q0.5 0.09375 0.96875 0.09375q1.3125 0 2.046875 -0.875q0.75 -0.890625 0.75 -2.671875zm21.936462 -1.25l-7.984375 0l0 -1.359375l7.984375 0l0 1.359375zm0 3.234375l-7.984375 0l0 -1.359375l7.984375 0l0 1.359375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m210.85876 115.059105l-0.03125 -1.25q-0.765625 0.75 -1.546875 1.09375q-0.78125 0.328125 -1.65625 0.328125q-0.796875 0 -1.359375 -0.203125q-0.5625 -0.203125 
-0.9375 -0.5625q-0.359375 -0.359375 -0.53125 -0.84375q-0.171875 -0.484375 -0.171875 -1.046875q0 -1.40625 1.046875 -2.1875q1.046875 -0.796875 3.078125 -0.796875l1.9375 0l0 -0.828125q0 -0.8125 -0.53125 -1.3125q-0.53125 -0.5 -1.609375 -0.5q-0.796875 0 -1.5625 0.1875q-0.765625 0.171875 -1.578125 0.484375l0 -1.453125q0.296875 -0.109375 0.671875 -0.21875q0.390625 -0.109375 0.796875 -0.1875q0.421875 -0.078125 0.875 -0.125q0.453125 -0.0625 0.921875 -0.0625q0.84375 0 1.515625 0.1875q0.6875 0.1875 1.15625 0.578125q0.46875 0.375 0.71875 0.953125q0.25 0.5625 0.25 1.34375l0 6.421875l-1.453125 0zm-0.171875 -4.234375l-2.0625 0q-0.59375 0 -1.03125 0.125q-0.4375 0.109375 -0.71875 0.34375q-0.28125 0.21875 -0.421875 0.53125q-0.125 0.296875 -0.125 0.6875q0 0.28125 0.078125 0.53125q0.09375 0.234375 0.28125 0.421875q0.1875 0.1875 0.484375 0.3125q0.296875 0.109375 0.71875 0.109375q0.5625 0 1.28125 -0.34375q0.71875 -0.34375 1.515625 -1.078125l0 -1.640625z" fill-rule="nonzero"></path><path fill="#980000" d="m230.9671 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m253.85669 110.23098q0 1.15625 -0.328125 2.078125q-0.3125 0.90625 -0.90625 1.546875q-0.578125 0.640625 -1.421875 0.984375q-0.84375 0.328125 -1.90625 0.328125q-0.828125 0 -1.6875 -0.15625q-0.859375 -0.15625 -1.703125 -0.5l0 -12.5625l1.609375 0l0 3.609375l-0.0625 1.71875q0.6875 -0.9375 1.484375 -1.3125q0.796875 -0.390625 1.703125 -0.390625q0.796875 0 1.390625 0.34375q0.609375 0.328125 1.015625 0.9375q0.40625 0.609375 0.609375 1.46875q0.203125 0.859375 0.203125 1.90625zm-1.640625 0.078125q0 -0.734375 -0.109375 -1.34375q-0.109375 -0.609375 -0.34375 -1.046875q-0.234375 -0.4375 -0.59375 -0.6875q-0.359375 -0.25 -0.859375 -0.25q-0.3125 0 -0.625 0.109375q-0.3125 0.09375 -0.65625 0.328125q-0.328125 0.21875 -0.703125 0.59375q-0.375 0.375 -0.8125 0.9375l0 4.515625q0.484375 0.1875 0.96875 0.296875q0.5 0.09375 0.9375 0.09375q0.5625 0 1.0625 -0.171875q0.5 -0.171875 0.890625 -0.578125q0.390625 -0.421875 0.609375 -1.09375q0.234375 -0.6875 0.234375 -1.703125z" fill-rule="nonzero"></path><path fill="#980000" d="m271.99628 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m294.1671 114.715355q-0.625 0.234375 -1.296875 0.34375q-0.65625 0.125 -1.359375 0.125q-2.21875 0 -3.40625 -1.1875q-1.1875 -1.203125 -1.1875 -3.5q0 -1.109375 0.34375 -2.0q0.34375 -0.90625 0.953125 -1.546875q0.625 -0.640625 1.484375 -0.984375q0.875 -0.34375 1.90625 -0.34375q0.734375 0 1.359375 0.109375q0.625 0.09375 1.203125 0.3125l0 1.546875q-0.59375 -0.3125 -1.234375 -0.453125q-0.625 -0.15625 -1.28125 -0.15625q-0.625 0 -1.1875 0.25q-0.546875 0.234375 -0.96875 0.6875q-0.40625 0.4375 -0.65625 1.078125q-0.234375 0.640625 -0.234375 1.4375q0 1.6875 0.8125 2.53125q0.828125 0.84375 2.28125 0.84375q0.65625 0 1.265625 -0.140625q0.625 -0.15625 1.203125 -0.453125l0 1.5z" fill-rule="nonzero"></path><path fill="#980000" d="m313.02545 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 
-2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#980000" d="m340.12546 101.246605q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#000000" d="m349.19525 116.965355q0.484375 0.015625 0.921875 -0.09375q0.453125 -0.09375 0.78125 -0.296875q0.34375 -0.203125 0.546875 -0.5q0.203125 -0.296875 0.203125 -0.671875q0 -0.390625 -0.140625 -0.625q-0.125 -0.25 -0.296875 -0.453125q-0.171875 -0.203125 -0.3125 -0.4375q-0.125 -0.234375 -0.125 -0.625q0 -0.1875 0.078125 -0.390625q0.078125 -0.21875 0.21875 -0.390625q0.15625 -0.1875 0.390625 -0.296875q0.25 -0.109375 0.5625 -0.109375q0.328125 0 0.625 0.140625q0.3125 0.125 0.53125 0.40625q0.234375 0.28125 0.359375 0.703125q0.140625 0.40625 0.140625 0.96875q0 0.78125 -0.28125 1.484375q-0.28125 0.703125 -0.84375 1.25q-0.5625 0.5625 -1.40625 0.875q-0.828125 0.328125 -1.953125 0.328125l0 -1.265625z" fill-rule="nonzero"></path><path fill="#0000ff" d="m368.52234 110.590355q0 -1.1875 0.3125 -2.109375q0.328125 -0.921875 0.921875 -1.546875q0.609375 -0.640625 1.4375 -0.96875q0.84375 -0.328125 1.875 -0.328125q0.453125 0 0.875 0.0625q0.4375 0.046875 0.859375 0.171875l0 -3.921875l1.625 0l0 13.109375l-1.453125 0l-0.0625 -1.765625q-0.671875 0.984375 -1.46875 1.46875q-0.78125 0.46875 -1.703125 0.46875q-0.796875 0 -1.40625 -0.328125q-0.59375 -0.34375 -1.0 -0.953125q-0.40625 -0.609375 -0.609375 -1.453125q-0.203125 -0.859375 -0.203125 -1.90625zm1.640625 -0.09375q0 1.6875 0.5 2.515625q0.5 0.828125 1.40625 0.828125q0.609375 0 1.296875 -0.546875q0.6875 -0.546875 1.4375 -1.625l0 -4.3125q-0.40625 -0.1875 -0.890625 -0.28125q-0.484375 -0.109375 -0.953125 -0.109375q-1.3125 0 -2.0625 0.859375q-0.734375 0.84375 -0.734375 2.671875z" fill-rule="nonzero"></path><path fill="#980000" d="m395.0838 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m417.89526 109.902855q0 0.34375 -0.015625 0.578125q-0.015625 0.234375 -0.03125 0.4375l-6.53125 0q0 1.4375 0.796875 2.203125q0.796875 0.765625 2.296875 0.765625q0.40625 0 0.8125 -0.03125q0.40625 -0.046875 0.78125 -0.09375q0.390625 -0.0625 0.734375 -0.125q0.34375 -0.078125 0.640625 -0.15625l0 1.328125q-0.65625 0.1875 -1.484375 0.296875q-0.828125 0.125 -1.71875 0.125q-1.203125 0 -2.0625 -0.328125q-0.859375 -0.328125 -1.421875 -0.9375q-0.546875 -0.625 -0.8125 -1.515625q-0.265625 -0.890625 -0.265625 -2.03125q0 -0.984375 0.28125 -1.859375q0.296875 -0.875 0.828125 -1.53125q0.546875 -0.671875 1.328125 -1.0625q0.796875 -0.390625 1.796875 -0.390625q0.984375 0 1.734375 
0.3125q0.75 0.296875 1.265625 0.859375q0.515625 0.5625 0.78125 1.375q0.265625 0.796875 0.265625 1.78125zm-1.6875 -0.21875q0.03125 -0.625 -0.125 -1.140625q-0.140625 -0.515625 -0.453125 -0.890625q-0.3125 -0.375 -0.78125 -0.578125q-0.453125 -0.203125 -1.078125 -0.203125q-0.515625 0 -0.953125 0.203125q-0.4375 0.203125 -0.765625 0.578125q-0.3125 0.359375 -0.5 0.890625q-0.1875 0.515625 -0.234375 1.140625l4.890625 0z" fill-rule="nonzero"></path><path fill="#980000" d="m436.11298 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#000000" d="m451.7682 116.965355q0.484375 0.015625 0.921875 -0.09375q0.453125 -0.09375 0.78125 -0.296875q0.34375 -0.203125 0.546875 -0.5q0.203125 -0.296875 0.203125 -0.671875q0 -0.390625 -0.140625 -0.625q-0.125 -0.25 -0.296875 -0.453125q-0.171875 -0.203125 -0.3125 -0.4375q-0.125 -0.234375 -0.125 -0.625q0 -0.1875 0.078125 -0.390625q0.078125 -0.21875 0.21875 -0.390625q0.15625 -0.1875 0.390625 -0.296875q0.25 -0.109375 0.5625 -0.109375q0.328125 0 0.625 0.140625q0.3125 0.125 0.53125 0.40625q0.234375 0.28125 0.359375 0.703125q0.140625 0.40625 0.140625 0.96875q0 0.78125 -0.28125 1.484375q-0.28125 0.703125 -0.84375 1.25q-0.5625 0.5625 -1.40625 0.875q-0.828125 0.328125 -1.953125 0.328125l0 -1.265625z" fill-rule="nonzero"></path><path fill="#0000ff" d="m479.82965 103.44973q-1.265625 -0.265625 -2.1875 -0.265625q-2.1875 0 -2.1875 2.28125l0 1.640625l4.09375 0l0 1.34375l-4.09375 0l0 6.609375l-1.640625 0l0 -6.609375l-2.984375 0l0 -1.34375l2.984375 0l0 -1.546875q0 -3.71875 3.875 -3.71875q0.96875 0 2.140625 0.21875l0 1.390625zm-9.75 2.296875l0 0z" fill-rule="nonzero"></path><path fill="#980000" d="m497.65674 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#980000" d="m524.7567 101.246605q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375zm20.514648 0q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 
-7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#d0e0e3" d="m433.02225 144.22713l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m433.02225 144.22713l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path fill="#434343" d="m469.6318 285.2438q0 0.859375 -0.359375 1.515625q-0.34375 0.640625 -0.984375 1.078125q-0.625 0.421875 -1.515625 0.640625q-0.875 0.21875 -1.953125 0.21875q-0.46875 0 -0.953125 -0.046875q-0.484375 -0.03125 -0.921875 -0.09375q-0.4375 -0.046875 -0.828125 -0.125q-0.390625 -0.078125 -0.703125 -0.15625l0 -1.59375q0.6875 0.25 1.546875 0.40625q0.875 0.140625 1.984375 0.140625q0.796875 0 1.359375 -0.125q0.5625 -0.125 0.921875 -0.359375q0.359375 -0.25 0.515625 -0.59375q0.171875 -0.359375 0.171875 -0.8125q0 -0.5 -0.28125 -0.84375q-0.265625 -0.34375 -0.71875 -0.609375q-0.4375 -0.28125 -1.015625 -0.5q-0.578125 -0.234375 -1.171875 -0.46875q-0.59375 -0.25 -1.171875 -0.53125q-0.5625 -0.28125 -1.015625 -0.671875q-0.4375 -0.390625 -0.71875 -0.90625q-0.265625 -0.515625 -0.265625 -1.234375q0 -0.625 0.265625 -1.21875q0.265625 -0.609375 0.8125 -1.078125q0.546875 -0.46875 1.40625 -0.75q0.859375 -0.296875 2.046875 -0.296875q0.296875 0 0.65625 0.03125q0.359375 0.03125 0.71875 0.078125q0.375 0.046875 0.734375 0.125q0.359375 0.0625 0.65625 0.125l0 1.484375q-0.71875 -0.203125 -1.4375 -0.296875q-0.703125 -0.109375 -1.375 -0.109375q-1.421875 0 -2.09375 0.46875q-0.65625 0.46875 -0.65625 1.265625q0 0.5 0.265625 0.859375q0.28125 0.34375 0.71875 0.625q0.453125 0.28125 1.015625 0.515625q0.578125 0.21875 1.171875 0.46875q0.59375 0.234375 1.15625 0.515625q0.578125 0.28125 1.015625 0.6875q0.453125 0.390625 0.71875 0.921875q0.28125 0.515625 0.28125 1.25z" fill-rule="nonzero"></path><path fill="#434343" d="m469.14743 310.52505l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m470.1318 332.52505l-1.859375 0l-1.8125 -3.875q-0.203125 -0.453125 -0.421875 -0.734375q-0.203125 -0.296875 -0.453125 -0.46875q-0.25 -0.171875 -0.546875 -0.25q-0.28125 -0.078125 -0.640625 -0.078125l-0.78125 0l0 5.40625l-1.65625 0l0 -12.125l3.25 0q1.046875 0 1.8125 0.234375q0.765625 0.234375 1.25 0.65625q0.484375 0.40625 0.703125 1.0q0.234375 0.578125 0.234375 1.296875q0 0.5625 -0.171875 1.078125q-0.15625 0.5 -0.484375 0.921875q-0.328125 0.40625 -0.828125 0.71875q-0.484375 0.296875 -1.109375 0.4375q0.515625 0.171875 0.859375 0.625q0.359375 0.4375 0.734375 1.171875l1.921875 3.984375zm-2.640625 -8.796875q0 -0.96875 -0.609375 -1.453125q-0.609375 -0.484375 -1.71875 -0.484375l-1.546875 0l0 4.015625l1.328125 0q0.59375 0 1.0625 -0.140625q0.46875 -0.140625 0.796875 -0.40625q0.328125 -0.265625 0.5 -0.640625q0.1875 -0.390625 0.1875 -0.890625z" fill-rule="nonzero"></path><path fill="#434343" d="m470.78806 342.40005l-4.109375 12.125l-2.234375 0l-4.03125 -12.125l1.875 0l2.609375 8.171875l0.75 2.390625l0.75 -2.390625l2.625 -8.171875l1.765625 0z" fill-rule="nonzero"></path><path fill="#434343" d="m469.14743 376.52505l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m470.1318 398.52505l-1.859375 0l-1.8125 -3.875q-0.203125 -0.453125 -0.421875 -0.734375q-0.203125 -0.296875 -0.453125 -0.46875q-0.25 -0.171875 -0.546875 -0.25q-0.28125 -0.078125 -0.640625 -0.078125l-0.78125 0l0 
5.40625l-1.65625 0l0 -12.125l3.25 0q1.046875 0 1.8125 0.234375q0.765625 0.234375 1.25 0.65625q0.484375 0.40625 0.703125 1.0q0.234375 0.578125 0.234375 1.296875q0 0.5625 -0.171875 1.078125q-0.15625 0.5 -0.484375 0.921875q-0.328125 0.40625 -0.828125 0.71875q-0.484375 0.296875 -1.109375 0.4375q0.515625 0.171875 0.859375 0.625q0.359375 0.4375 0.734375 1.171875l1.921875 3.984375zm-2.640625 -8.796875q0 -0.96875 -0.609375 -1.453125q-0.609375 -0.484375 -1.71875 -0.484375l-1.546875 0l0 4.015625l1.328125 0q0.59375 0 1.0625 -0.140625q0.46875 -0.140625 0.796875 -0.40625q0.328125 -0.265625 0.5 -0.640625q0.1875 -0.390625 0.1875 -0.890625z" fill-rule="nonzero"></path><path fill="#d0e0e3" d="m208.50131 144.22713l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m208.50131 144.22713l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path fill="#434343" d="m245.09523 288.07193q-1.453125 0.609375 -3.0625 0.609375q-2.5625 0 -3.9375 -1.53125q-1.375 -1.546875 -1.375 -4.546875q0 -1.46875 0.375 -2.640625q0.375 -1.171875 1.078125 -2.0q0.71875 -0.828125 1.71875 -1.265625q1.0 -0.453125 2.234375 -0.453125q0.84375 0 1.5625 0.15625q0.734375 0.140625 1.40625 0.4375l0 1.625q-0.65625 -0.359375 -1.375 -0.546875q-0.703125 -0.203125 -1.53125 -0.203125q-0.859375 0 -1.546875 0.328125q-0.6875 0.3125 -1.171875 0.921875q-0.484375 0.609375 -0.75 1.484375q-0.25 0.875 -0.25 2.0q0 2.359375 0.953125 3.5625q0.953125 1.1875 2.796875 1.1875q0.78125 0 1.5 -0.171875q0.71875 -0.1875 1.375 -0.515625l0 1.5625z" fill-rule="nonzero"></path><path fill="#434343" d="m245.00148 310.52505l-6.984375 0l0 -12.125l1.6875 0l0 10.71875l5.296875 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m240.25148 321.79068l-2.796875 0l0 -1.390625l7.25 0l0 1.390625l-2.78125 0l0 9.328125l2.78125 0l0 1.40625l-7.25 0l0 -1.40625l2.796875 0l0 -9.328125z" fill-rule="nonzero"></path><path fill="#434343" d="m244.62648 354.52505l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m245.22023 376.52505l-2.15625 0l-3.53125 -7.5625l-1.03125 -2.421875l0 6.109375l0 3.875l-1.53125 0l0 -12.125l2.125 0l3.359375 7.15625l1.21875 2.78125l0 -6.5l0 -3.4375l1.546875 0l0 12.125z" fill-rule="nonzero"></path><path fill="#434343" d="m245.5171 387.8063l-3.59375 0l0 10.71875l-1.671875 0l0 -10.71875l-3.59375 0l0 -1.40625l8.859375 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m275.37988 154.37495l159.52756 23.685043" fill-rule="nonzero"></path><path stroke="#e06666" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m275.3799 154.37495l156.1376 23.181747" fill-rule="evenodd"></path><path fill="#e06666" stroke="#e06666" stroke-width="1.0" stroke-linecap="butt" d="m431.51752 177.5567l-1.2775574 0.9472351l3.2214355 -0.6586304l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m343.18604 134.28076l22.677185 2.9606323l-2.551178 14.992126l-22.677185 -2.9606323z" fill-rule="nonzero"></path><path fill="#e06666" d="m354.0799 156.38391l1.5900879 0.42819214q-0.5466919 1.6304474 -1.8093567 2.4425812q-1.2600708 0.79670715 -2.871399 0.5863342q-1.9986572 -0.26094055 -3.0022888 -1.7156067q-0.98553467 -1.4680634 -0.57107544 -3.903656q0.26757812 -1.5723419 0.97821045 -2.6771393q0.72875977 -1.1181946 1.8974915 -1.5643921q1.1868591 -0.45959473 2.4418335 -0.2957611q1.5958252 
0.2083435 2.4664917 1.1414185q0.870697 0.9330597 0.9132385 2.451355l-1.6532898 0.03627014q-0.06451416 -1.0169067 -0.56933594 -1.5870667q-0.5048218 -0.57014465 -1.3259583 -0.6773529q-1.2549744 -0.16383362 -2.1972961 0.6270752q-0.92681885 0.79293823 -1.2547302 2.7198334q-0.33312988 1.9577179 0.27389526 2.9509125q0.6096802 0.9777832 1.8181763 1.1355591q0.9760742 0.12742615 1.7265015 -0.37338257q0.75302124 -0.51623535 1.1488037 -1.725174z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m273.4716 190.73802l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#e06666" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m276.88937 190.48589l157.11765 -11.590378" fill-rule="evenodd"></path><path fill="#e06666" stroke="#e06666" stroke-width="1.0" stroke-linecap="butt" d="m276.88937 190.4859l1.0388184 -1.2042694l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m275.37988 209.34238l158.61417 19.433075" fill-rule="nonzero"></path><path stroke="#93c47d" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m275.3799 209.34238l155.2125 19.016296" fill-rule="evenodd"></path><path fill="#93c47d" stroke="#93c47d" stroke-width="1.0" stroke-linecap="butt" d="m430.5924 228.35869l-1.2529907 0.9794769l3.2035828 -0.7404938l-2.9300842 -1.4919891z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m333.4637 185.07516l55.62204 7.3385773l-2.551178 14.992126l-55.62204 -7.3385773z" fill-rule="nonzero"></path><path fill="#93c47d" d="m344.0724 210.80086l-1.5335693 -0.20233154l2.282013 -13.410477l1.6420288 0.21664429l-0.81314087 4.7784424q1.2763367 -1.1712341 2.9028625 -0.9566345q0.898468 0.11853027 1.6410217 0.5947571q0.74520874 0.46080017 1.1436157 1.1910706q0.4165039 0.7168884 0.5534363 1.6805725q0.13696289 0.96369934 -0.041412354 2.0118713q-0.42492676 2.4971313 -1.897644 3.70549q-1.4700928 1.1929626 -3.2050476 0.96406555q-1.7349548 -0.22891235 -2.4669495 -1.7911987l-0.20721436 1.2177277zm0.82388306 -4.9346313q-0.29638672 1.7418213 0.0345459 2.589264q0.5749512 1.3682098 1.907135 1.5439758q1.0843506 0.1430664 2.0317688 -0.6775665q0.9500427 -0.83602905 1.2674255 -2.7011719q0.32263184 -1.8959656 -0.28164673 -2.905548q-0.6043091 -1.0095978 -1.6731567 -1.1506195q-1.0843506 -0.1430664 -2.0343933 0.6929779q-0.9500427 0.8360443 -1.2516785 2.6086884zm10.417725 10.452469q-1.069397 -1.90625 -1.6235046 -4.327667q-0.55148315 -2.4368134 -0.13180542 -4.9031067q0.37246704 -2.1888428 1.4079285 -4.085312q1.22995 -2.2017975 3.340271 -4.2716675l1.1927795 0.15737915q-1.4379578 1.7488098 -1.9332886 2.518753q-0.7727356 1.1903992 -1.3315125 2.5193634q-0.6939087 1.6578522 -0.9876709 3.384262q-0.7501831 4.408493 1.2595825 9.165375l-1.1927795 -0.15737915zm10.658783 -6.2690277l1.5898132 0.43040466q-0.54663086 1.6300049 -1.809082 2.4405823q-1.2598572 0.795166 -2.8709106 0.5826111q-1.998291 -0.26365662 -3.0017395 -1.7199249q-0.98532104 -1.469635 -0.5708618 -3.9050903q0.2675476 -1.5722656 0.9780884 -2.6763153q0.7286377 -1.1174164 1.8971863 -1.5621338q1.1866455 -0.45809937 2.4414062 -0.2925415q1.5955505 0.21051025 2.466034 1.1448975q0.8705139 0.9343872 0.9130249 2.453003l-1.6530151 0.034072876q-0.06448364 -1.0171661 -0.56918335 -1.588089q-0.5047302 -0.57092285 -1.3257141 -0.679245q-1.2547607 -0.16555786 -2.19693 0.62423706q-0.92666626 0.79185486 -1.2545471 2.7186432q-0.33312988 1.9576111 0.2737732 2.9517975q0.6095581 0.97875977 1.8178406 1.1381836q0.9758911 0.12875366 1.7261963 -0.3711548q0.75289917 
-0.5153198 1.1486206 -1.723938zm2.6727295 8.027954l-1.1772766 -0.15533447q3.4894104 -4.0313263 4.2395935 -8.439835q0.2911682 -1.7109833 0.19244385 -3.4576569q-0.09185791 -1.4147949 -0.43444824 -2.7523499q-0.21463013 -0.8793793 -1.0047302 -2.937912l1.1773071 0.15531921q1.3441162 2.52565 1.771698 4.946121q0.37420654 2.0824585 0.0017089844 4.2713013q-0.41967773 2.4662933 -1.7735596 4.651718q-1.3357544 2.1720734 -2.9927368 3.718628z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m276.33405 274.71732l159.52756 23.685028" fill-rule="nonzero"></path><path stroke="#bf9000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m276.33405 274.71732l156.1376 23.181732" fill-rule="evenodd"></path><path fill="#bf9000" stroke="#bf9000" stroke-width="1.0" stroke-linecap="butt" d="m432.47165 297.89905l-1.2775269 0.9472351l3.221405 -0.6586304l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m343.18744 255.03127l22.677185 2.9606476l-2.551178 14.992126l-22.677185 -2.9606323z" fill-rule="nonzero"></path><path fill="#bf9000" d="m353.7983 277.53867l1.667572 0.43832397q-0.6546631 1.4272766 -1.8963318 2.1160583q-1.2236023 0.67541504 -2.927887 0.45291138q-2.138092 -0.2791443 -3.170105 -1.7532654q-1.0320129 -1.4741516 -0.62802124 -3.848053q0.41708374 -2.4510193 1.9028931 -3.6437073q1.5013123 -1.1906738 3.5309448 -0.9256897q1.9676819 0.25689697 2.9815674 1.7444153q1.0138855 1.4875183 0.6020508 3.9077148q-0.023590088 0.13873291 -0.08892822 0.42959595l-7.281952 -0.9507141q-0.1798706 1.6153259 0.4970398 2.570343q0.6768799 0.9550476 1.9008484 1.1148376q0.8986206 0.11734009 1.6125488 -0.2621765q0.73205566 -0.39291382 1.29776 -1.3905945zm-4.9844055 -3.3768005l5.453705 0.7120056q0.0987854 -1.2319336 -0.30758667 -1.9153137q-0.62753296 -1.0588989 -1.8979797 -1.224762q-1.1310425 -0.14764404 -2.0523682 0.5199585q-0.90319824 0.6541748 -1.1957703 1.9081116z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m274.42575 311.08038l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#bf9000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m277.84354 310.82825l157.11761 -11.590393" fill-rule="evenodd"></path><path fill="#bf9000" stroke="#bf9000" stroke-width="1.0" stroke-linecap="butt" d="m277.84354 310.82825l1.0387878 -1.2042542l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m275.38113 346.36475l159.52756 23.685028" fill-rule="nonzero"></path><path stroke="#134f5c" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m275.38116 346.36475l156.1376 23.181732" fill-rule="evenodd"></path><path fill="#134f5c" stroke="#134f5c" stroke-width="1.0" stroke-linecap="butt" d="m431.51877 369.54648l-1.2775269 0.9472351l3.221405 -0.6586304l-2.8910828 -1.5661316z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m343.18732 325.66556l22.677155 2.9606323l-2.551178 14.992126l-22.677185 -2.9606323z" fill-rule="nonzero"></path><path fill="#134f5c" d="m349.54965 350.8171l1.4348755 -8.432098l-1.4718628 -0.19213867l0.22033691 -1.2948914l1.4718933 0.19216919l0.17575073 -1.0328064q0.16525269 -0.9711609 0.4169922 -1.4267883q0.34259033 -0.6170654 1.0123901 -0.923584q0.67245483 -0.3218994 1.757019 -0.18029785q0.6972046 0.091033936 1.5231018 0.3564148l-0.49447632 1.4166565q-0.49554443 -0.15924072 -0.96035767 -0.21990967q-0.7591553 -0.099121094 -1.124115 0.18414307q-0.3623352 0.26785278 -0.51187134 1.1465149l-0.15213013 
0.8940735l1.9057007 0.24880981l-0.22033691 1.2948608l-1.9057007 -0.24880981l-1.4348755 8.432098l-1.642334 -0.2144165z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m273.47284 382.7278l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#134f5c" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m276.89062 382.47568l157.11765 -11.590393" fill-rule="evenodd"></path><path fill="#134f5c" stroke="#134f5c" stroke-width="1.0" stroke-linecap="butt" d="m276.89062 382.47568l1.0388184 -1.2042542l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m275.37988 417.41953l159.52756 23.685059" fill-rule="nonzero"></path><path stroke="#a64d79" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m275.3799 417.41953l156.1376 23.181732" fill-rule="evenodd"></path><path fill="#a64d79" stroke="#a64d79" stroke-width="1.0" stroke-linecap="butt" d="m431.51752 440.6013l-1.2775574 0.9472351l3.2214355 -0.6586304l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m318.4788 393.75833l72.09448 9.480316l-2.551178 14.992126l-72.09448 -9.480316z" fill-rule="nonzero"></path><path fill="#741b47" d="m336.46692 420.44815l0.20986938 -1.2331848q-1.1760864 1.3267517 -2.973114 1.0904541q-1.1618652 -0.15280151 -2.0457764 -0.91516113q-0.8658142 -0.77575684 -1.2139282 -1.9877319q-0.32995605 -1.2253723 -0.075531006 -2.720581q0.24655151 -1.4489746 0.928772 -2.5727234q0.6977234 -1.1217346 1.7657471 -1.6274414q1.0835266 -0.5036621 2.29187 -0.34475708q0.8830261 0.116119385 1.501709 0.5756836q0.6341553 0.4616394 0.9656372 1.1198425l0.8183899 -4.8093567l1.6421204 0.21594238l-2.282074 13.410675l-1.5336914 -0.20166016zm-4.4098816 -5.544159q-0.31741333 1.8651733 0.3152771 2.8939514q0.64819336 1.0307922 1.717102 1.1713562q1.0844116 0.14260864 1.993042 -0.63619995q0.90859985 -0.7788086 1.2181091 -2.5977478q0.3383789 -1.9884644 -0.2788086 -3.0151978q-0.6145935 -1.0421448 -1.7454834 -1.1908569q-1.099884 -0.14465332 -1.995636 0.6516113q-0.89572144 0.79626465 -1.2236023 2.7230835zm10.849762 10.425415q-1.0694885 -1.9057007 -1.6236267 -4.326721q-0.5515442 -2.4364624 -0.13183594 -4.9028015q0.37246704 -2.1888733 1.407959 -4.085663q1.230011 -2.2022095 3.3404236 -4.2728577l1.1928406 0.15686035q-1.4380188 1.7493286 -1.9333496 2.5194397q-0.77279663 1.1906738 -1.3315735 2.5197754q-0.6939392 1.6580505 -0.98773193 3.384491q-0.7501831 4.4085693 1.2597351 9.164337l-1.1928406 -0.15686035zm10.895721 -5.8008423l1.6673584 0.43988037q-0.65460205 1.4268494 -1.8961487 2.1145935q-1.2234192 0.67437744 -2.9275208 0.45028687q-2.137848 -0.28112793 -3.1697083 -1.7563782q-1.0318604 -1.4752197 -0.62789917 -3.8490906q0.41705322 -2.4508972 1.90271 -3.642395q1.5011597 -1.1894226 3.530548 -0.9225769q1.9674377 0.25872803 2.9812012 1.747345q1.0137329 1.4886475 0.6019287 3.908722q-0.023620605 0.13873291 -0.08895874 0.42956543l-7.281067 -0.9574585q-0.17984009 1.6153564 0.49694824 2.5711365q0.67678833 0.9557495 1.9006348 1.1166992q0.89849854 0.118133545 1.6123657 -0.2607727q0.7319641 -0.39230347 1.2976074 -1.3895569zm-4.9837646 -3.3817444l5.453064 0.71707153q0.0987854 -1.2320251 -0.30752563 -1.9158325q-0.6274414 -1.0596008 -1.8977661 -1.226654q-1.1308899 -0.14868164 -2.052124 0.51812744q-0.9031067 0.6534729 -1.1956482 1.9072876zm8.479828 7.0406494l0.32000732 -1.8805847l1.8899841 0.24853516l-0.32000732 1.8805847q-0.17575073 1.0327759 -0.65509033 1.6158752q-0.4819641 0.59854126 -1.329773 0.83377075l-0.3440857 
-0.77020264q0.56344604 -0.14654541 0.88739014 -0.5609741q0.3239441 -0.4144287 0.4965515 -1.2427368l-0.9449768 -0.12426758zm5.1080627 0.6717224l1.434845 -8.431793l-1.4717102 -0.19351196l0.22033691 -1.2948303l1.4717102 0.19351196l0.17572021 -1.0327759q0.16525269 -0.97109985 0.4169922 -1.4265442q0.3425293 -0.6168823 1.0122986 -0.9227905q0.6723633 -0.32131958 1.7567749 -0.17871094q0.69714355 0.09164429 1.5229492 0.35784912l-0.4944458 1.4163818q-0.4954834 -0.159729 -0.9602356 -0.2208252q-0.75909424 -0.099823 -1.1239929 0.18313599q-0.3623047 0.2675476 -0.5118103 1.1461792l-0.15216064 0.89404297l1.9054871 0.25057983l-0.22033691 1.2947998l-1.9054871 -0.25054932l-1.4348145 8.431763l-1.6421204 -0.21591187zm5.1492004 4.7115173l-1.1773682 -0.15481567q3.4895935 -4.0325623 4.239807 -8.441132q0.2911377 -1.711029 0.19238281 -3.4575806q-0.09185791 -1.4146729 -0.43447876 -2.7519836q-0.21466064 -0.87924194 -1.0047913 -2.9373474l1.1773682 0.15484619q1.3442383 2.5249329 1.7718201 4.9450684q0.37423706 2.0821838 0.0017700195 4.271057q-0.41970825 2.466339 -1.7736206 4.6522217q-1.335846 2.1725159 -2.9928894 3.7196655z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m273.4716 453.7826l160.53543 -11.842499" fill-rule="nonzero"></path><path stroke="#741b47" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m276.88937 453.53046l157.11765 -11.590363" fill-rule="evenodd"></path><path fill="#741b47" stroke="#741b47" stroke-width="1.0" stroke-linecap="butt" d="m276.88937 453.5305l1.0388184 -1.2042847l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m274.42447 484.65225l159.52759 23.685028" fill-rule="nonzero"></path><path stroke="#3d85c6" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m274.42447 484.65222l156.13766 23.181732" fill-rule="evenodd"></path><path fill="#3d85c6" stroke="#3d85c6" stroke-width="1.0" stroke-linecap="butt" d="m430.56213 507.83395l-1.2775574 0.9472351l3.2214355 -0.65859985l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m272.51617 521.0153l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#3c78d8" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m275.93396 520.7632l157.11765 -11.590393" fill-rule="evenodd"></path><path fill="#3c78d8" stroke="#3c78d8" stroke-width="1.0" stroke-linecap="butt" d="m275.934 520.7632l1.0387878 -1.2042847l-2.9986572 1.348877l3.1641235 0.89416504z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m318.4788 462.90244l72.09448 9.480316l-2.551178 14.992126l-72.09448 -9.480316z" fill-rule="nonzero"></path><path fill="#3d85c6" d="m334.14395 488.05753q-1.0632629 0.6639099 -1.970398 0.87557983q-0.9045105 0.19625854 -1.8804626 0.06790161q-1.5956421 -0.20980835 -2.3320312 -1.0946045q-0.73373413 -0.90023804 -0.5265198 -2.117981q0.123291016 -0.7244873 0.5482788 -1.267456q0.4250183 -0.54299927 1.0120544 -0.8282471q0.58703613 -0.28527832 1.284668 -0.3826599q0.5193176 -0.07354736 1.5136719 -0.053100586q2.0558777 0.018188477 3.0559387 -0.1812439q0.05770874 -0.33914185 0.07345581 -0.4316101q0.17050171 -1.0019531 -0.22341919 -1.4792786q-0.54074097 -0.63842773 -1.7955627 -0.8034363q-1.1618652 -0.15280151 -1.7877502 0.1746521q-0.6259155 0.3274536 -1.067627 1.3410034l-1.5872803 -0.4451294q0.40811157 -1.0021973 1.011383 -1.5690308q0.6187744 -0.5647583 1.62146 -0.77960205q1.020813 -0.22824097 2.2911377 -0.06121826q1.2548218 0.16500854 1.9795532 0.5597534q0.72476196 
0.39474487 1.0204773 0.8906555q0.29574585 0.49591064 0.31976318 1.1924744q0.022125244 0.42843628 -0.16412354 1.5228577l-0.37509155 2.2042847q-0.39083862 2.2967834 -0.40283203 2.9255676q0.006164551 0.615448 0.23706055 1.2131348l-1.7350769 -0.22814941q-0.17678833 -0.54330444 -0.12072754 -1.2451172zm0.48486328 -3.6870117q-0.9588318 0.23638916 -2.8159485 0.26010132q-1.046051 0.004272461 -1.4958191 0.13424683q-0.44973755 0.12997437 -0.74243164 0.45394897q-0.2927246 0.3239746 -0.3661499 0.7555847q-0.11279297 0.6628418 0.30685425 1.1750488q0.43777466 0.49884033 1.3982544 0.6251221q0.96047974 0.12631226 1.7749023 -0.19210815q0.8170471 -0.3338318 1.2940063 -0.9960327q0.3604126 -0.53570557 0.54403687 -1.6147461l0.10229492 -0.6011658zm5.703949 9.764496q-1.0694885 -1.9057007 -1.6236572 -4.326721q-0.5515137 -2.4364624 -0.13183594 -4.9028015q0.37249756 -2.1888428 1.4079895 -4.085663q1.230011 -2.202179 3.3404236 -4.272827l1.1928406 0.15686035q-1.4380188 1.7492981 -1.9333496 2.5194397q-0.77279663 1.1906433 -1.3315735 2.5197754q-0.6939392 1.6580505 -0.98773193 3.3844604q-0.7502136 4.4086 1.2597351 9.164337l-1.1928406 -0.15686035zm5.204529 -3.3500366l-1.5336914 -0.20166016l2.282074 -13.410706l1.6421204 0.21594238l-0.81314087 4.778534q1.2763672 -1.1717224 2.9030151 -0.9578247q0.89849854 0.118133545 1.6411133 0.59402466q0.74523926 0.46047974 1.1436768 1.1905212q0.41653442 0.7166748 0.5534973 1.6802673q0.13696289 0.963562 -0.041412354 2.0117493q-0.42492676 2.4971619 -1.8977356 3.7061157q-1.4701538 1.193512 -3.2052002 0.96533203q-1.7350769 -0.22814941 -2.467102 -1.7900391l-0.20721436 1.2177429zm0.82388306 -4.9346924q-0.29641724 1.7418518 0.034576416 2.5891113q0.5749817 1.3678894 1.9072571 1.5430603q1.0844116 0.14260864 2.0318604 -0.67837524q0.95007324 -0.83639526 1.2674866 -2.7015686q0.32263184 -1.8959961 -0.28170776 -2.9052734q-0.6043091 -1.0092773 -1.6732483 -1.1498413q-1.0844116 -0.14257812 -2.0344849 0.69381714q-0.95010376 0.83639526 -1.2517395 2.6090698zm8.363342 6.1427917l0.32003784 -1.8805542l1.8899841 0.24850464l-0.32003784 1.8805847q-0.17575073 1.0327759 -0.65509033 1.6159058q-0.4819641 0.59851074 -1.3297424 0.83374023l-0.3440857 -0.77020264q0.56344604 -0.1465149 0.8873596 -0.5609436q0.3239441 -0.4144287 0.49658203 -1.2427673l-0.9450073 -0.12426758zm11.041382 1.4519348l0.20983887 -1.2331543q-1.1760559 1.3267212 -2.9730835 1.0904236q-1.1618652 -0.152771 -2.0457764 -0.91516113q-0.8658142 -0.77575684 -1.2139282 -1.9877319q-0.32998657 -1.2253418 -0.075531006 -2.7205505q0.24655151 -1.4489746 0.928772 -2.572754q0.6977234 -1.1217346 1.7657471 -1.6274414q1.0835266 -0.5036621 2.29187 -0.34475708q0.8830261 0.116119385 1.501709 0.5757141q0.6341553 0.4616089 0.9656067 1.119812l0.8184204 -4.8093567l1.6421204 0.21594238l-2.282074 13.410706l-1.5336914 -0.20169067zm-4.409912 -5.5441284q-0.3173828 1.8651428 0.31530762 2.893921q0.64819336 1.0307922 1.717102 1.1713562q1.0844116 0.14260864 1.9930115 -0.63619995q0.9086304 -0.7788086 1.2181396 -2.5977478q0.3383789 -1.9884644 -0.2788086 -3.0151978q-0.6145935 -1.0421448 -1.7454834 -1.1908569q-1.099884 -0.1446228 -1.995636 0.65164185q-0.89572144 0.79626465 -1.2236328 2.7230835zm8.773895 10.152435l-1.1773376 -0.15484619q3.4895935 -4.0325623 4.2397766 -8.441132q0.2911682 -1.711029 0.19241333 -3.45755q-0.09188843 -1.4147034 -0.43447876 -2.7520142q-0.21466064 -0.87924194 -1.0047913 -2.937317l1.1773376 0.15481567q1.3442383 2.5249329 1.7718506 4.9450684q0.37423706 2.0822144 0.001739502 4.271057q-0.41967773 2.466339 -1.7736206 4.6522217q-1.3358154 2.1725159 -2.9928894 3.719696z" 
fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m7.0 5.0l689.98425 0l0 49.007874l-689.98425 0z" fill-rule="nonzero"></path><path fill="#434343" d="m279.24628 31.451248q-0.8125 0.3125 -1.5625 0.46875q-0.75 0.171875 -1.578125 0.171875q-1.296875 0 -2.3125 -0.390625q-1.0 -0.390625 -1.703125 -1.140625q-0.6875 -0.765625 -1.046875 -1.890625q-0.34375 -1.125 -0.34375 -2.625q0 -1.53125 0.390625 -2.71875q0.390625 -1.1875 1.109375 -2.015625q0.71875 -0.828125 1.75 -1.25q1.046875 -0.4375 2.328125 -0.4375q0.421875 0 0.78125 0.03125q0.375 0.015625 0.71875 0.0625q0.359375 0.046875 0.71875 0.140625q0.359375 0.078125 0.75 0.203125l0 2.265625q-0.78125 -0.375 -1.5 -0.53125q-0.71875 -0.15625 -1.296875 -0.15625q-0.859375 0 -1.484375 0.3125q-0.609375 0.3125 -1.0 0.875q-0.390625 0.5625 -0.578125 1.34375q-0.1875 0.765625 -0.1875 1.6875q0 0.984375 0.1875 1.765625q0.1875 0.765625 0.578125 1.3125q0.40625 0.53125 1.03125 0.8125q0.625 0.28125 1.484375 0.28125q0.296875 0 0.65625 -0.046875q0.359375 -0.0625 0.71875 -0.15625q0.375 -0.109375 0.734375 -0.234375q0.359375 -0.140625 0.65625 -0.28125l0 2.140625zm10.726044 -4.3125q0 1.109375 -0.328125 2.03125q-0.3125 0.921875 -0.90625 1.578125q-0.59375 0.65625 -1.453125 1.03125q-0.859375 0.359375 -1.96875 0.359375q-1.046875 0 -1.875 -0.3125q-0.8125 -0.3125 -1.390625 -0.90625q-0.578125 -0.609375 -0.890625 -1.515625q-0.296875 -0.921875 -0.296875 -2.140625q0 -1.125 0.3125 -2.03125q0.328125 -0.921875 0.921875 -1.578125q0.59375 -0.65625 1.453125 -1.0q0.875 -0.359375 1.953125 -0.359375q1.0625 0 1.890625 0.3125q0.828125 0.296875 1.390625 0.921875q0.578125 0.609375 0.875 1.515625q0.3125 0.90625 0.3125 2.09375zm-2.359375 0.046875q0 -1.46875 -0.5625 -2.203125q-0.546875 -0.734375 -1.625 -0.734375q-0.59375 0 -1.015625 0.234375q-0.40625 0.234375 -0.671875 0.640625q-0.265625 0.390625 -0.390625 0.9375q-0.125 0.53125 -0.125 1.140625q0 1.484375 0.59375 2.234375q0.59375 0.734375 1.609375 0.734375q0.578125 0 0.984375 -0.21875q0.421875 -0.234375 0.671875 -0.625q0.265625 -0.40625 0.390625 -0.953125q0.140625 -0.546875 0.140625 -1.1875zm9.741669 4.734375l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -9.421875l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 -0.328125q0.421875 -0.109375 0.953125 -0.109375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm9.913544 0l-2.609375 0l-3.734375 -9.421875l2.515625 0l1.953125 5.34375l0.59375 1.71875l0.578125 -1.65625l1.96875 -5.40625l2.4375 0l-3.703125 9.421875zm13.194794 -5.4375q0 0.234375 -0.015625 0.609375q-0.015625 0.359375 -0.046875 0.6875l-6.1875 0q0 0.625 0.1875 1.109375q0.1875 0.46875 0.53125 0.78125q0.359375 0.3125 0.84375 0.484375q0.484375 0.171875 1.078125 0.171875q0.6875 0 1.46875 -0.109375q0.78125 -0.109375 1.625 -0.34375l0 1.796875q-0.359375 0.09375 -0.78125 0.1875q-0.421875 0.078125 -0.875 0.140625q-0.4375 0.078125 -0.90625 0.109375q-0.453125 0.03125 -0.875 0.03125q-1.078125 0 -1.9375 -0.3125q-0.84375 -0.3125 -1.4375 -0.90625q-0.59375 -0.59375 -0.90625 -1.46875q-0.3125 -0.890625 -0.3125 -2.046875q0 -1.15625 0.3125 -2.09375q0.3125 -0.9375 0.890625 -1.609375q0.578125 -0.671875 1.390625 -1.03125q0.828125 -0.375 1.828125 -0.375q1.015625 0 1.78125 0.3125q0.765625 0.296875 1.28125 0.859375q0.53125 0.5625 0.796875 1.328125q0.265625 0.765625 0.265625 1.6875zm-2.296875 -0.328125q0 
-0.546875 -0.15625 -0.953125q-0.140625 -0.421875 -0.390625 -0.6875q-0.25 -0.28125 -0.59375 -0.40625q-0.34375 -0.125 -0.734375 -0.125q-0.84375 0 -1.390625 0.578125q-0.546875 0.5625 -0.65625 1.59375l3.921875 0zm9.960419 5.765625l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -9.421875l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 -0.328125q0.421875 -0.109375 0.953125 -0.109375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm12.413544 -0.09375q-0.609375 0.140625 -1.234375 0.21875q-0.625 0.09375 -1.171875 0.09375q-0.9375 0 -1.609375 -0.203125q-0.671875 -0.1875 -1.109375 -0.578125q-0.4375 -0.40625 -0.65625 -1.015625q-0.203125 -0.625 -0.203125 -1.484375l0 -4.59375l-2.53125 0l0 -1.765625l2.53125 0l0 -2.421875l2.328125 -0.59375l0 3.015625l3.65625 0l0 1.765625l-3.65625 0l0 4.421875q0 0.8125 0.359375 1.234375q0.375 0.40625 1.25 0.40625q0.5625 0 1.078125 -0.09375q0.53125 -0.09375 0.96875 -0.21875l0 1.8125zm8.038544 -11.90625q0 0.296875 -0.125 0.578125q-0.109375 0.265625 -0.3125 0.46875q-0.1875 0.1875 -0.46875 0.3125q-0.265625 0.109375 -0.578125 0.109375q-0.3125 0 -0.59375 -0.109375q-0.265625 -0.125 -0.46875 -0.3125q-0.203125 -0.203125 -0.3125 -0.46875q-0.109375 -0.28125 -0.109375 -0.578125q0 -0.3125 0.109375 -0.578125q0.109375 -0.265625 0.3125 -0.46875q0.203125 -0.203125 0.46875 -0.3125q0.28125 -0.125 0.59375 -0.125q0.3125 0 0.578125 0.125q0.28125 0.109375 0.46875 0.3125q0.203125 0.203125 0.3125 0.46875q0.125 0.265625 0.125 0.578125zm-2.515625 4.34375l-2.65625 0l0 -1.765625l4.984375 0l0 7.65625l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -5.890625zm15.710419 2.875q0 1.109375 -0.328125 2.03125q-0.3125 0.921875 -0.90625 1.578125q-0.59375 0.65625 -1.453125 1.03125q-0.859375 0.359375 -1.96875 0.359375q-1.046875 0 -1.875 -0.3125q-0.8125 -0.3125 -1.390625 -0.90625q-0.578125 -0.609375 -0.890625 -1.515625q-0.296875 -0.921875 -0.296875 -2.140625q0 -1.125 0.3125 -2.03125q0.328125 -0.921875 0.921875 -1.578125q0.59375 -0.65625 1.453125 -1.0q0.875 -0.359375 1.953125 -0.359375q1.0625 0 1.890625 0.3125q0.828125 0.296875 1.390625 0.921875q0.578125 0.609375 0.875 1.515625q0.3125 0.90625 0.3125 2.09375zm-2.359375 0.046875q0 -1.46875 -0.5625 -2.203125q-0.546875 -0.734375 -1.625 -0.734375q-0.59375 0 -1.015625 0.234375q-0.40625 0.234375 -0.671875 0.640625q-0.265625 0.390625 -0.390625 0.9375q-0.125 0.53125 -0.125 1.140625q0 1.484375 0.59375 2.234375q0.59375 0.734375 1.609375 0.734375q0.578125 0 0.984375 -0.21875q0.421875 -0.234375 0.671875 -0.625q0.265625 -0.40625 0.390625 -0.953125q0.140625 -0.546875 0.140625 -1.1875zm9.741669 4.734375l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -9.421875l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 -0.328125q0.421875 -0.109375 0.953125 -0.109375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm10.554169 0l-0.0625 -1.234375q-0.296875 0.3125 -0.625 0.578125q-0.3125 0.265625 -0.703125 0.46875q-0.390625 0.1875 -0.859375 0.296875q-0.453125 0.109375 -1.0 0.109375q-0.71875 0 -1.265625 -0.21875q-0.546875 -0.21875 -0.921875 -0.59375q-0.375 -0.375 -0.5625 
-0.90625q-0.1875 -0.546875 -0.1875 -1.203125q0 -0.671875 0.28125 -1.234375q0.28125 -0.5625 0.859375 -0.96875q0.578125 -0.40625 1.4375 -0.640625q0.875 -0.234375 2.046875 -0.234375l1.234375 0l0 -0.5625q0 -0.359375 -0.109375 -0.65625q-0.09375 -0.296875 -0.328125 -0.5q-0.21875 -0.203125 -0.578125 -0.3125q-0.359375 -0.109375 -0.890625 -0.109375q-0.84375 0 -1.65625 0.1875q-0.8125 0.1875 -1.5625 0.53125l0 -1.8125q0.671875 -0.265625 1.546875 -0.4375q0.890625 -0.171875 1.859375 -0.171875q1.046875 0 1.796875 0.203125q0.75 0.1875 1.234375 0.59375q0.484375 0.390625 0.71875 1.0q0.234375 0.59375 0.234375 1.390625l0 6.4375l-1.9375 0zm-0.328125 -4.171875l-1.375 0q-0.578125 0 -0.984375 0.125q-0.390625 0.109375 -0.640625 0.3125q-0.25 0.1875 -0.375 0.4375q-0.109375 0.25 -0.109375 0.546875q0 0.5625 0.359375 0.875q0.375 0.296875 1.015625 0.296875q0.46875 0 0.984375 -0.34375q0.515625 -0.34375 1.125 -0.984375l0 -1.265625zm7.7104187 -7.171875l-2.65625 0l0 -1.765625l4.984375 0l0 11.34375l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -9.578125zm23.498962 11.34375l-1.703125 -3.890625q-0.25 -0.5625 -0.6875 -0.84375q-0.421875 -0.28125 -1.0 -0.28125l-0.4375 0l0 5.015625l-2.28125 0l0 -12.125l3.53125 0q1.0 0 1.8125 0.171875q0.8125 0.171875 1.390625 0.578125q0.578125 0.390625 0.890625 1.03125q0.3125 0.640625 0.3125 1.5625q0 0.671875 -0.203125 1.203125q-0.1875 0.515625 -0.546875 0.890625q-0.34375 0.375 -0.84375 0.609375q-0.484375 0.21875 -1.046875 0.3125q0.4375 0.09375 0.8125 0.515625q0.375 0.40625 0.734375 1.1875l1.9375 4.0625l-2.671875 0zm-0.5625 -8.546875q0 -0.890625 -0.578125 -1.28125q-0.5625 -0.390625 -1.6875 -0.390625l-1.0 0l0 3.421875l0.921875 0q0.53125 0 0.953125 -0.109375q0.4375 -0.109375 0.734375 -0.328125q0.3125 -0.234375 0.484375 -0.5625q0.171875 -0.328125 0.171875 -0.75zm13.179169 0.28125q0 0.953125 -0.328125 1.75q-0.3125 0.796875 -0.9375 1.390625q-0.609375 0.578125 -1.546875 0.90625q-0.921875 0.328125 -2.140625 0.328125l-1.171875 0l0 3.890625l-2.296875 0l0 -12.125l3.5625 0q1.171875 0 2.078125 0.265625q0.90625 0.25 1.515625 0.75q0.625 0.484375 0.9375 1.203125q0.328125 0.71875 0.328125 1.640625zm-2.390625 0.15625q0 -0.484375 -0.15625 -0.875q-0.15625 -0.390625 -0.46875 -0.671875q-0.3125 -0.28125 -0.78125 -0.421875q-0.46875 -0.15625 -1.125 -0.15625l-1.203125 0l0 4.4375l1.28125 0q0.59375 0 1.046875 -0.15625q0.453125 -0.15625 0.765625 -0.453125q0.3125 -0.3125 0.46875 -0.734375q0.171875 -0.4375 0.171875 -0.96875zm12.288544 7.640625q-0.8125 0.3125 -1.5625 0.46875q-0.75 0.171875 -1.578125 0.171875q-1.296875 0 -2.3125 -0.390625q-1.0 -0.390625 -1.703125 -1.140625q-0.6875 -0.765625 -1.046875 -1.890625q-0.34375 -1.125 -0.34375 -2.625q0 -1.53125 0.390625 -2.71875q0.390625 -1.1875 1.109375 -2.015625q0.71875 -0.828125 1.75 -1.25q1.046875 -0.4375 2.328125 -0.4375q0.421875 0 0.78125 0.03125q0.375 0.015625 0.71875 0.0625q0.359375 0.046875 0.71875 0.140625q0.359375 0.078125 0.75 0.203125l0 2.265625q-0.78125 -0.375 -1.5 -0.53125q-0.71875 -0.15625 -1.296875 -0.15625q-0.859375 0 -1.484375 0.3125q-0.609375 0.3125 -1.0 0.875q-0.390625 0.5625 -0.578125 1.34375q-0.1875 0.765625 -0.1875 1.6875q0 0.984375 0.1875 1.765625q0.1875 0.765625 0.578125 1.3125q0.40625 0.53125 1.03125 0.8125q0.625 0.28125 1.484375 0.28125q0.296875 0 0.65625 -0.046875q0.359375 -0.0625 0.71875 -0.15625q0.375 -0.109375 0.734375 -0.234375q0.359375 -0.140625 0.65625 -0.28125l0 2.140625z" fill-rule="nonzero"></path></g></svg>
+
diff --git a/chapter/2/images/p-2.png b/chapter/2/images/p-2.png
new file mode 100644
index 0000000..ccc5d09
--- /dev/null
+++ b/chapter/2/images/p-2.png
Binary files differ
diff --git a/chapter/2/images/p-2.svg b/chapter/2/images/p-2.svg
new file mode 100644
index 0000000..f5c6b05
--- /dev/null
+++ b/chapter/2/images/p-2.svg
@@ -0,0 +1,4 @@
+<?xml version="1.0" standalone="yes"?>
+
+<svg version="1.1" viewBox="0.0 0.0 720.0 540.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><clipPath id="p.0"><path d="m0 0l720.0 0l0 540.0l-720.0 0l0 -540.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l720.0 0l0 540.0l-720.0 0z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m40.0 88.13911l613.98425 0l0 44.000008l-613.98425 0z" fill-rule="nonzero"></path><path fill="#000000" d="m151.20563 109.902855q0 0.34375 -0.015625 0.578125q-0.015625 0.234375 -0.03125 0.4375l-6.53125 0q0 1.4375 0.796875 2.203125q0.796875 0.765625 2.296875 0.765625q0.40625 0 0.8125 -0.03125q0.40625 -0.046875 0.78125 -0.09375q0.390625 -0.0625 0.734375 -0.125q0.34375 -0.078125 0.640625 -0.15625l0 1.328125q-0.65625 0.1875 -1.484375 0.296875q-0.828125 0.125 -1.71875 0.125q-1.203125 0 -2.0625 -0.328125q-0.859375 -0.328125 -1.421875 -0.9375q-0.546875 -0.625 -0.8125 -1.515625q-0.265625 -0.890625 -0.265625 -2.03125q0 -0.984375 0.28125 -1.859375q0.296875 -0.875 0.828125 -1.53125q0.546875 -0.671875 1.328125 -1.0625q0.796875 -0.390625 1.796875 -0.390625q0.984375 0 1.734375 0.3125q0.75 0.296875 1.265625 0.859375q0.515625 0.5625 0.78125 1.375q0.265625 0.796875 0.265625 1.78125zm-1.6875 -0.21875q0.03125 -0.625 -0.125 -1.140625q-0.140625 -0.515625 -0.453125 -0.890625q-0.3125 -0.375 -0.78125 -0.578125q-0.453125 -0.203125 -1.078125 -0.203125q-0.515625 0 -0.953125 0.203125q-0.4375 0.203125 -0.765625 0.578125q-0.3125 0.359375 -0.5 0.890625q-0.1875 0.515625 -0.234375 1.140625l4.890625 0zm12.460419 5.375l-2.140625 0l-2.515625 -3.546875l-2.484375 3.546875l-2.078125 0l3.609375 -4.671875l-3.453125 -4.640625l2.078125 0l2.4375 3.578125l2.40625 -3.578125l2.0 0l-3.5 4.671875l3.640625 4.640625zm9.819794 -4.828125q0 1.25 -0.34375 2.1875q-0.34375 0.921875 -0.953125 1.53125q-0.609375 0.609375 -1.453125 0.921875q-0.828125 0.296875 -1.8125 0.296875q-0.4375 0 -0.890625 -0.046875q-0.4375 -0.046875 -0.890625 -0.15625l0 3.890625l-1.609375 0l0 -13.109375l1.4375 0l0.109375 1.5625q0.6875 -0.96875 1.46875 -1.34375q0.796875 -0.390625 1.71875 -0.390625q0.796875 0 1.390625 0.34375q0.609375 0.328125 1.015625 0.9375q0.40625 0.609375 0.609375 1.46875q0.203125 0.859375 0.203125 1.90625zm-1.640625 0.078125q0 -0.734375 -0.109375 -1.34375q-0.109375 -0.609375 -0.34375 -1.046875q-0.234375 -0.4375 -0.59375 -0.6875q-0.359375 -0.25 -0.859375 -0.25q-0.3125 0 -0.625 0.109375q-0.3125 0.09375 -0.65625 0.328125q-0.328125 0.21875 -0.703125 0.59375q-0.375 0.375 -0.8125 0.9375l0 4.515625q0.453125 0.1875 0.9375 0.296875q0.5 0.09375 0.96875 0.09375q1.3125 0 2.046875 -0.875q0.75 -0.890625 0.75 -2.671875zm21.936462 -1.25l-7.984375 0l0 -1.359375l7.984375 0l0 1.359375zm0 3.234375l-7.984375 0l0 -1.359375l7.984375 0l0 1.359375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m210.85876 115.059105l-0.03125 -1.25q-0.765625 0.75 -1.546875 1.09375q-0.78125 0.328125 -1.65625 0.328125q-0.796875 0 -1.359375 -0.203125q-0.5625 -0.203125 -0.9375 -0.5625q-0.359375 -0.359375 -0.53125 -0.84375q-0.171875 -0.484375 -0.171875 -1.046875q0 -1.40625 1.046875 -2.1875q1.046875 -0.796875 3.078125 -0.796875l1.9375 0l0 -0.828125q0 -0.8125 -0.53125 -1.3125q-0.53125 -0.5 -1.609375 -0.5q-0.796875 0 -1.5625 0.1875q-0.765625 0.171875 -1.578125 0.484375l0 -1.453125q0.296875 -0.109375 0.671875 -0.21875q0.390625 -0.109375 0.796875 -0.1875q0.421875 -0.078125 0.875 -0.125q0.453125 -0.0625 0.921875 -0.0625q0.84375 0 
1.515625 0.1875q0.6875 0.1875 1.15625 0.578125q0.46875 0.375 0.71875 0.953125q0.25 0.5625 0.25 1.34375l0 6.421875l-1.453125 0zm-0.171875 -4.234375l-2.0625 0q-0.59375 0 -1.03125 0.125q-0.4375 0.109375 -0.71875 0.34375q-0.28125 0.21875 -0.421875 0.53125q-0.125 0.296875 -0.125 0.6875q0 0.28125 0.078125 0.53125q0.09375 0.234375 0.28125 0.421875q0.1875 0.1875 0.484375 0.3125q0.296875 0.109375 0.71875 0.109375q0.5625 0 1.28125 -0.34375q0.71875 -0.34375 1.515625 -1.078125l0 -1.640625z" fill-rule="nonzero"></path><path fill="#980000" d="m230.9671 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m253.85669 110.23098q0 1.15625 -0.328125 2.078125q-0.3125 0.90625 -0.90625 1.546875q-0.578125 0.640625 -1.421875 0.984375q-0.84375 0.328125 -1.90625 0.328125q-0.828125 0 -1.6875 -0.15625q-0.859375 -0.15625 -1.703125 -0.5l0 -12.5625l1.609375 0l0 3.609375l-0.0625 1.71875q0.6875 -0.9375 1.484375 -1.3125q0.796875 -0.390625 1.703125 -0.390625q0.796875 0 1.390625 0.34375q0.609375 0.328125 1.015625 0.9375q0.40625 0.609375 0.609375 1.46875q0.203125 0.859375 0.203125 1.90625zm-1.640625 0.078125q0 -0.734375 -0.109375 -1.34375q-0.109375 -0.609375 -0.34375 -1.046875q-0.234375 -0.4375 -0.59375 -0.6875q-0.359375 -0.25 -0.859375 -0.25q-0.3125 0 -0.625 0.109375q-0.3125 0.09375 -0.65625 0.328125q-0.328125 0.21875 -0.703125 0.59375q-0.375 0.375 -0.8125 0.9375l0 4.515625q0.484375 0.1875 0.96875 0.296875q0.5 0.09375 0.9375 0.09375q0.5625 0 1.0625 -0.171875q0.5 -0.171875 0.890625 -0.578125q0.390625 -0.421875 0.609375 -1.09375q0.234375 -0.6875 0.234375 -1.703125z" fill-rule="nonzero"></path><path fill="#980000" d="m271.99628 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m294.1671 114.715355q-0.625 0.234375 -1.296875 0.34375q-0.65625 0.125 -1.359375 0.125q-2.21875 0 -3.40625 -1.1875q-1.1875 -1.203125 -1.1875 -3.5q0 -1.109375 0.34375 -2.0q0.34375 -0.90625 0.953125 -1.546875q0.625 -0.640625 1.484375 -0.984375q0.875 -0.34375 1.90625 -0.34375q0.734375 0 1.359375 0.109375q0.625 0.09375 1.203125 0.3125l0 1.546875q-0.59375 -0.3125 -1.234375 -0.453125q-0.625 -0.15625 -1.28125 -0.15625q-0.625 0 -1.1875 0.25q-0.546875 0.234375 -0.96875 0.6875q-0.40625 0.4375 -0.65625 1.078125q-0.234375 0.640625 -0.234375 1.4375q0 1.6875 0.8125 2.53125q0.828125 0.84375 2.28125 0.84375q0.65625 0 1.265625 -0.140625q0.625 -0.15625 1.203125 -0.453125l0 1.5z" fill-rule="nonzero"></path><path fill="#980000" d="m313.02545 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 
-4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#980000" d="m340.12546 101.246605q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#000000" d="m349.19525 116.965355q0.484375 0.015625 0.921875 -0.09375q0.453125 -0.09375 0.78125 -0.296875q0.34375 -0.203125 0.546875 -0.5q0.203125 -0.296875 0.203125 -0.671875q0 -0.390625 -0.140625 -0.625q-0.125 -0.25 -0.296875 -0.453125q-0.171875 -0.203125 -0.3125 -0.4375q-0.125 -0.234375 -0.125 -0.625q0 -0.1875 0.078125 -0.390625q0.078125 -0.21875 0.21875 -0.390625q0.15625 -0.1875 0.390625 -0.296875q0.25 -0.109375 0.5625 -0.109375q0.328125 0 0.625 0.140625q0.3125 0.125 0.53125 0.40625q0.234375 0.28125 0.359375 0.703125q0.140625 0.40625 0.140625 0.96875q0 0.78125 -0.28125 1.484375q-0.28125 0.703125 -0.84375 1.25q-0.5625 0.5625 -1.40625 0.875q-0.828125 0.328125 -1.953125 0.328125l0 -1.265625z" fill-rule="nonzero"></path><path fill="#0000ff" d="m368.52234 110.590355q0 -1.1875 0.3125 -2.109375q0.328125 -0.921875 0.921875 -1.546875q0.609375 -0.640625 1.4375 -0.96875q0.84375 -0.328125 1.875 -0.328125q0.453125 0 0.875 0.0625q0.4375 0.046875 0.859375 0.171875l0 -3.921875l1.625 0l0 13.109375l-1.453125 0l-0.0625 -1.765625q-0.671875 0.984375 -1.46875 1.46875q-0.78125 0.46875 -1.703125 0.46875q-0.796875 0 -1.40625 -0.328125q-0.59375 -0.34375 -1.0 -0.953125q-0.40625 -0.609375 -0.609375 -1.453125q-0.203125 -0.859375 -0.203125 -1.90625zm1.640625 -0.09375q0 1.6875 0.5 2.515625q0.5 0.828125 1.40625 0.828125q0.609375 0 1.296875 -0.546875q0.6875 -0.546875 1.4375 -1.625l0 -4.3125q-0.40625 -0.1875 -0.890625 -0.28125q-0.484375 -0.109375 -0.953125 -0.109375q-1.3125 0 -2.0625 0.859375q-0.734375 0.84375 -0.734375 2.671875z" fill-rule="nonzero"></path><path fill="#980000" d="m395.0838 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375z" fill-rule="nonzero"></path><path fill="#0000ff" d="m417.89526 109.902855q0 0.34375 -0.015625 0.578125q-0.015625 0.234375 -0.03125 0.4375l-6.53125 0q0 1.4375 0.796875 2.203125q0.796875 0.765625 2.296875 0.765625q0.40625 0 0.8125 -0.03125q0.40625 -0.046875 0.78125 -0.09375q0.390625 -0.0625 0.734375 -0.125q0.34375 -0.078125 0.640625 -0.15625l0 1.328125q-0.65625 0.1875 -1.484375 0.296875q-0.828125 0.125 -1.71875 0.125q-1.203125 0 -2.0625 -0.328125q-0.859375 -0.328125 -1.421875 -0.9375q-0.546875 -0.625 -0.8125 -1.515625q-0.265625 -0.890625 -0.265625 -2.03125q0 -0.984375 0.28125 -1.859375q0.296875 -0.875 0.828125 -1.53125q0.546875 -0.671875 1.328125 -1.0625q0.796875 -0.390625 1.796875 -0.390625q0.984375 0 1.734375 0.3125q0.75 0.296875 1.265625 0.859375q0.515625 0.5625 0.78125 1.375q0.265625 0.796875 0.265625 1.78125zm-1.6875 -0.21875q0.03125 -0.625 -0.125 -1.140625q-0.140625 -0.515625 -0.453125 -0.890625q-0.3125 -0.375 -0.78125 -0.578125q-0.453125 -0.203125 -1.078125 -0.203125q-0.515625 0 -0.953125 0.203125q-0.4375 0.203125 -0.765625 0.578125q-0.3125 0.359375 -0.5 0.890625q-0.1875 0.515625 -0.234375 1.140625l4.890625 0z" fill-rule="nonzero"></path><path fill="#980000" 
d="m436.11298 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#000000" d="m451.7682 116.965355q0.484375 0.015625 0.921875 -0.09375q0.453125 -0.09375 0.78125 -0.296875q0.34375 -0.203125 0.546875 -0.5q0.203125 -0.296875 0.203125 -0.671875q0 -0.390625 -0.140625 -0.625q-0.125 -0.25 -0.296875 -0.453125q-0.171875 -0.203125 -0.3125 -0.4375q-0.125 -0.234375 -0.125 -0.625q0 -0.1875 0.078125 -0.390625q0.078125 -0.21875 0.21875 -0.390625q0.15625 -0.1875 0.390625 -0.296875q0.25 -0.109375 0.5625 -0.109375q0.328125 0 0.625 0.140625q0.3125 0.125 0.53125 0.40625q0.234375 0.28125 0.359375 0.703125q0.140625 0.40625 0.140625 0.96875q0 0.78125 -0.28125 1.484375q-0.28125 0.703125 -0.84375 1.25q-0.5625 0.5625 -1.40625 0.875q-0.828125 0.328125 -1.953125 0.328125l0 -1.265625z" fill-rule="nonzero"></path><path fill="#0000ff" d="m479.82965 103.44973q-1.265625 -0.265625 -2.1875 -0.265625q-2.1875 0 -2.1875 2.28125l0 1.640625l4.09375 0l0 1.34375l-4.09375 0l0 6.609375l-1.640625 0l0 -6.609375l-2.984375 0l0 -1.34375l2.984375 0l0 -1.546875q0 -3.71875 3.875 -3.71875q0.96875 0 2.140625 0.21875l0 1.390625zm-9.75 2.296875l0 0z" fill-rule="nonzero"></path><path fill="#980000" d="m497.65674 118.94973q-4.28125 -3.953125 -4.28125 -8.75q0 -1.125 0.21875 -2.234375q0.234375 -1.125 0.734375 -2.25q0.515625 -1.125 1.34375 -2.234375q0.828125 -1.125 2.015625 -2.234375l0.9375 0.953125q-3.59375 3.546875 -3.59375 7.875q0 2.15625 0.90625 4.140625q0.90625 1.984375 2.6875 3.75l-0.96875 0.984375zm6.5854187 -17.703125q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#980000" d="m524.7567 101.246605q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375zm20.514648 0q4.265625 3.953125 4.265625 8.8125q0 1.0 -0.203125 2.078125q-0.203125 1.078125 -0.703125 2.203125q-0.484375 1.125 -1.3125 2.28125q-0.828125 1.171875 -2.09375 2.328125l-0.9375 -0.953125q1.8125 -1.78125 2.703125 -3.734375q0.890625 -1.953125 0.890625 -4.078125q0 -4.421875 -3.59375 -7.953125l0.984375 -0.984375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m349.9367 132.14035l22.677155 2.9606323l-2.551178 14.992126l-22.677155 -2.9606323z" fill-rule="nonzero"></path><path fill="#e06666" d="m360.83054 154.24348l1.5900879 0.4282074q-0.5466919 1.6304474 -1.8093567 2.442566q-1.2600403 0.79670715 -2.8713684 0.5863342q-1.9986572 -0.2609253 -3.0023193 -1.7156067q-0.98550415 -1.4680481 -0.5710449 -3.9036407q0.2675476 -1.5723419 
0.97821045 -2.6771545q0.72875977 -1.1181793 1.8974915 -1.5643921q1.1868286 -0.45959473 2.441803 -0.29574585q1.5958557 0.2083435 2.4665222 1.1414032q0.8706665 0.93307495 0.913208 2.451355l-1.6532898 0.03627014q-0.06451416 -1.0169067 -0.56933594 -1.5870514q-0.50479126 -0.57014465 -1.3259583 -0.6773529q-1.2549744 -0.16384888 -2.1972961 0.6270752q-0.92681885 0.79293823 -1.2546997 2.719818q-0.3331604 1.9577332 0.27389526 2.9509277q0.60964966 0.97776794 1.8181458 1.1355438q0.97610474 0.1274414 1.7265015 -0.37338257q0.75302124 -0.51623535 1.1488037 -1.725174z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m7.0 5.0l689.98425 0l0 49.007874l-689.98425 0z" fill-rule="nonzero"></path><path fill="#434343" d="m284.5312 31.919998l-2.6875 0l-1.203125 -3.734375l-0.421875 -1.53125l-0.421875 1.5625l-1.1875 3.703125l-2.5625 0l-0.671875 -12.125l2.0 0l0.296875 7.78125l0.078125 2.109375l0.546875 -1.890625l1.328125 -4.3125l1.4375 0l1.40625 4.578125l0.46875 1.609375l0.03125 -1.890625l0.296875 -7.984375l1.953125 0l-0.6875 12.125zm7.6322937 -12.0q0 0.296875 -0.125 0.578125q-0.109375 0.265625 -0.3125 0.46875q-0.1875 0.1875 -0.46875 0.3125q-0.265625 0.109375 -0.578125 0.109375q-0.3125 0 -0.59375 -0.109375q-0.265625 -0.125 -0.46875 -0.3125q-0.203125 -0.203125 -0.3125 -0.46875q-0.109375 -0.28125 -0.109375 -0.578125q0 -0.3125 0.109375 -0.578125q0.109375 -0.265625 0.3125 -0.46875q0.203125 -0.203125 0.46875 -0.3125q0.28125 -0.125 0.59375 -0.125q0.3125 0 0.578125 0.125q0.28125 0.109375 0.46875 0.3125q0.203125 0.203125 0.3125 0.46875q0.125 0.265625 0.125 0.578125zm-2.515625 4.34375l-2.65625 0l0 -1.765625l4.984375 0l0 7.65625l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -5.890625zm14.991669 7.5625q-0.609375 0.140625 -1.234375 0.21875q-0.625 0.09375 -1.171875 0.09375q-0.9375 0 -1.609375 -0.203125q-0.671875 -0.1875 -1.109375 -0.578125q-0.4375 -0.40625 -0.65625 -1.015625q-0.203125 -0.625 -0.203125 -1.484375l0 -4.59375l-2.53125 0l0 -1.765625l2.53125 0l0 -2.421875l2.328125 -0.59375l0 3.015625l3.65625 0l0 1.765625l-3.65625 0l0 4.421875q0 0.8125 0.359375 1.234375q0.375 0.40625 1.25 0.40625q0.5625 0 1.078125 -0.09375q0.53125 -0.09375 0.96875 -0.21875l0 1.8125zm8.101044 0.09375l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -13.109375l2.25 0l0 3.234375l-0.109375 1.703125q0.296875 -0.34375 0.59375 -0.609375q0.296875 -0.28125 0.640625 -0.46875q0.34375 -0.1875 0.734375 -0.28125q0.40625 -0.09375 0.890625 -0.09375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm23.280212 -8.265625q0 0.953125 -0.328125 1.75q-0.3125 0.796875 -0.9375 1.390625q-0.609375 0.578125 -1.546875 0.90625q-0.921875 0.328125 -2.140625 0.328125l-1.171875 0l0 3.890625l-2.296875 0l0 -12.125l3.5625 0q1.171875 0 2.078125 0.265625q0.90625 0.25 1.515625 0.75q0.625 0.484375 0.9375 1.203125q0.328125 0.71875 0.328125 1.640625zm-2.390625 0.15625q0 -0.484375 -0.15625 -0.875q-0.15625 -0.390625 -0.46875 -0.671875q-0.3125 -0.28125 -0.78125 -0.421875q-0.46875 -0.15625 -1.125 -0.15625l-1.203125 0l0 4.4375l1.28125 0q0.59375 0 1.046875 -0.15625q0.453125 -0.15625 0.765625 -0.453125q0.3125 -0.3125 0.46875 -0.734375q0.171875 -0.4375 0.171875 -0.96875zm9.819794 -3.890625q0 0.296875 -0.125 0.578125q-0.109375 0.265625 -0.3125 0.46875q-0.1875 0.1875 -0.46875 0.3125q-0.265625 0.109375 -0.578125 0.109375q-0.3125 0 -0.59375 -0.109375q-0.265625 -0.125 -0.46875 -0.3125q-0.203125 
-0.203125 -0.3125 -0.46875q-0.109375 -0.28125 -0.109375 -0.578125q0 -0.3125 0.109375 -0.578125q0.109375 -0.265625 0.3125 -0.46875q0.203125 -0.203125 0.46875 -0.3125q0.28125 -0.125 0.59375 -0.125q0.3125 0 0.578125 0.125q0.28125 0.109375 0.46875 0.3125q0.203125 0.203125 0.3125 0.46875q0.125 0.265625 0.125 0.578125zm-2.515625 4.34375l-2.65625 0l0 -1.765625l4.984375 0l0 7.65625l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -5.890625zm15.694794 2.78125q0 1.296875 -0.375 2.25q-0.359375 0.9375 -1.015625 1.5625q-0.640625 0.625 -1.53125 0.9375q-0.890625 0.296875 -1.9375 0.296875q-0.359375 0 -0.71875 -0.046875q-0.34375 -0.046875 -0.640625 -0.125l0 3.6875l-2.25 0l0 -13.109375l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 -0.328125q0.421875 -0.109375 0.953125 -0.109375q0.828125 0 1.46875 0.328125q0.65625 0.328125 1.09375 0.953125q0.453125 0.609375 0.671875 1.5q0.234375 0.875 0.234375 1.96875zm-2.375 0.09375q0 -0.78125 -0.109375 -1.328125q-0.109375 -0.546875 -0.328125 -0.890625q-0.203125 -0.359375 -0.5 -0.515625q-0.296875 -0.171875 -0.6875 -0.171875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 4.125q0.28125 0.09375 0.671875 0.171875q0.390625 0.0625 0.796875 0.0625q0.546875 0 0.984375 -0.21875q0.4375 -0.234375 0.75 -0.640625q0.3125 -0.40625 0.46875 -0.984375q0.171875 -0.59375 0.171875 -1.328125zm12.366669 -0.65625q0 0.234375 -0.015625 0.609375q-0.015625 0.359375 -0.046875 0.6875l-6.1875 0q0 0.625 0.1875 1.109375q0.1875 0.46875 0.53125 0.78125q0.359375 0.3125 0.84375 0.484375q0.484375 0.171875 1.078125 0.171875q0.6875 0 1.46875 -0.109375q0.78125 -0.109375 1.625 -0.34375l0 1.796875q-0.359375 0.09375 -0.78125 0.1875q-0.421875 0.078125 -0.875 0.140625q-0.4375 0.078125 -0.90625 0.109375q-0.453125 0.03125 -0.875 0.03125q-1.078125 0 -1.9375 -0.3125q-0.84375 -0.3125 -1.4375 -0.90625q-0.59375 -0.59375 -0.90625 -1.46875q-0.3125 -0.890625 -0.3125 -2.046875q0 -1.15625 0.3125 -2.09375q0.3125 -0.9375 0.890625 -1.609375q0.578125 -0.671875 1.390625 -1.03125q0.828125 -0.375 1.828125 -0.375q1.015625 0 1.78125 0.3125q0.765625 0.296875 1.28125 0.859375q0.53125 0.5625 0.796875 1.328125q0.265625 0.765625 0.265625 1.6875zm-2.296875 -0.328125q0 -0.546875 -0.15625 -0.953125q-0.140625 -0.421875 -0.390625 -0.6875q-0.25 -0.28125 -0.59375 -0.40625q-0.34375 -0.125 -0.734375 -0.125q-0.84375 0 -1.390625 0.578125q-0.546875 0.5625 -0.65625 1.59375l3.921875 0zm7.3822937 -5.578125l-2.65625 0l0 -1.765625l4.984375 0l0 11.34375l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -9.578125zm12.772919 -0.65625q0 0.296875 -0.125 0.578125q-0.109375 0.265625 -0.3125 0.46875q-0.1875 0.1875 -0.46875 0.3125q-0.265625 0.109375 -0.578125 0.109375q-0.3125 0 -0.59375 -0.109375q-0.265625 -0.125 -0.46875 -0.3125q-0.203125 -0.203125 -0.3125 -0.46875q-0.109375 -0.28125 -0.109375 -0.578125q0 -0.3125 0.109375 -0.578125q0.109375 -0.265625 0.3125 -0.46875q0.203125 -0.203125 0.46875 -0.3125q0.28125 -0.125 0.59375 -0.125q0.3125 0 0.578125 0.125q0.28125 0.109375 0.46875 0.3125q0.203125 0.203125 0.3125 0.46875q0.125 0.265625 0.125 0.578125zm-2.515625 4.34375l-2.65625 0l0 -1.765625l4.984375 0l0 7.65625l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -5.890625zm12.835419 7.65625l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -9.421875l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 
-0.328125q0.421875 -0.109375 0.953125 -0.109375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm10.194794 -12.0q0 0.296875 -0.125 0.578125q-0.109375 0.265625 -0.3125 0.46875q-0.1875 0.1875 -0.46875 0.3125q-0.265625 0.109375 -0.578125 0.109375q-0.3125 0 -0.59375 -0.109375q-0.265625 -0.125 -0.46875 -0.3125q-0.203125 -0.203125 -0.3125 -0.46875q-0.109375 -0.28125 -0.109375 -0.578125q0 -0.3125 0.109375 -0.578125q0.109375 -0.265625 0.3125 -0.46875q0.203125 -0.203125 0.46875 -0.3125q0.28125 -0.125 0.59375 -0.125q0.3125 0 0.578125 0.125q0.28125 0.109375 0.46875 0.3125q0.203125 0.203125 0.3125 0.46875q0.125 0.265625 0.125 0.578125zm-2.515625 4.34375l-2.65625 0l0 -1.765625l4.984375 0l0 7.65625l2.71875 0l0 1.765625l-8.03125 0l0 -1.765625l2.984375 0l0 -5.890625zm12.835419 7.65625l0 -6.140625q0 -1.546875 -1.140625 -1.546875q-0.578125 0 -1.109375 0.46875q-0.515625 0.453125 -1.109375 1.25l0 5.96875l-2.25 0l0 -9.421875l1.953125 0l0.046875 1.390625q0.296875 -0.359375 0.609375 -0.65625q0.3125 -0.296875 0.671875 -0.5q0.359375 -0.21875 0.765625 -0.328125q0.421875 -0.109375 0.953125 -0.109375q0.71875 0 1.25 0.234375q0.546875 0.234375 0.90625 0.671875q0.359375 0.421875 0.53125 1.03125q0.1875 0.609375 0.1875 1.359375l0 6.328125l-2.265625 0zm11.710419 -7.78125q0.265625 0.328125 0.359375 0.6875q0.109375 0.359375 0.109375 0.734375q0 0.78125 -0.28125 1.390625q-0.265625 0.59375 -0.765625 1.015625q-0.484375 0.40625 -1.1875 0.609375q-0.6875 0.203125 -1.515625 0.203125q-0.5 0 -0.921875 -0.09375q-0.40625 -0.09375 -0.625 -0.21875q-0.15625 0.15625 -0.265625 0.34375q-0.109375 0.1875 -0.109375 0.421875q0 0.15625 0.0625 0.3125q0.078125 0.140625 0.21875 0.265625q0.15625 0.109375 0.34375 0.1875q0.203125 0.0625 0.453125 0.078125l2.234375 0.078125q0.765625 0.015625 1.359375 0.1875q0.609375 0.171875 1.046875 0.5q0.4375 0.3125 0.671875 0.765625q0.234375 0.4375 0.234375 1.015625q0 0.65625 -0.296875 1.234375q-0.296875 0.59375 -0.890625 1.015625q-0.59375 0.4375 -1.484375 0.6875q-0.890625 0.25 -2.078125 0.25q-1.140625 0 -1.96875 -0.1875q-0.8125 -0.171875 -1.34375 -0.5q-0.515625 -0.3125 -0.765625 -0.765625q-0.25 -0.453125 -0.25 -0.984375q0 -0.328125 0.078125 -0.609375q0.09375 -0.28125 0.25 -0.53125q0.171875 -0.25 0.421875 -0.484375q0.25 -0.25 0.59375 -0.5q-0.453125 -0.25 -0.6875 -0.65625q-0.234375 -0.421875 -0.234375 -0.875q0 -0.328125 0.078125 -0.59375q0.09375 -0.28125 0.21875 -0.53125q0.140625 -0.25 0.3125 -0.46875q0.171875 -0.234375 0.375 -0.46875q-0.34375 -0.34375 -0.578125 -0.828125q-0.21875 -0.484375 -0.21875 -1.21875q0 -0.78125 0.28125 -1.390625q0.28125 -0.625 0.78125 -1.046875q0.5 -0.421875 1.1875 -0.640625q0.703125 -0.21875 1.515625 -0.21875q0.421875 0 0.796875 0.046875q0.390625 0.03125 0.703125 0.140625l3.265625 0l0 1.640625l-1.484375 0zm-5.328125 9.03125q0 0.546875 0.546875 0.796875q0.546875 0.265625 1.546875 0.265625q0.640625 0 1.078125 -0.125q0.453125 -0.109375 0.71875 -0.3125q0.28125 -0.1875 0.40625 -0.453125q0.125 -0.25 0.125 -0.53125q0 -0.25 -0.125 -0.421875q-0.109375 -0.171875 -0.328125 -0.296875q-0.203125 -0.109375 -0.484375 -0.171875q-0.28125 -0.0625 -0.625 -0.078125l-2.0 -0.03125q-0.265625 0.1875 -0.4375 0.34375q-0.171875 0.171875 -0.265625 0.328125q-0.09375 0.171875 -0.125 0.328125q-0.03125 0.171875 -0.03125 0.359375zm0.375 -7.5625q0 0.75 0.4375 1.203125q0.4375 0.4375 1.234375 0.4375q0.421875 0 0.71875 -0.140625q0.3125 -0.140625 0.515625 -0.375q0.203125 -0.234375 0.296875 -0.53125q0.109375 -0.3125 0.109375 
-0.640625q0 -0.796875 -0.4375 -1.234375q-0.4375 -0.4375 -1.21875 -0.4375q-0.421875 0 -0.734375 0.140625q-0.3125 0.140625 -0.515625 0.375q-0.203125 0.234375 -0.3125 0.546875q-0.09375 0.3125 -0.09375 0.65625z" fill-rule="nonzero"></path><path fill="#d0e0e3" d="m439.77292 141.8203l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m439.77292 141.8203l64.97638 0l0 384.75592l-64.97638 0z" fill-rule="nonzero"></path><path fill="#434343" d="m476.38245 282.837q0 0.859375 -0.359375 1.515625q-0.34375 0.640625 -0.984375 1.078125q-0.625 0.421875 -1.515625 0.640625q-0.875 0.21875 -1.953125 0.21875q-0.46875 0 -0.953125 -0.046875q-0.484375 -0.03125 -0.921875 -0.09375q-0.4375 -0.046875 -0.828125 -0.125q-0.390625 -0.078125 -0.703125 -0.15625l0 -1.59375q0.6875 0.25 1.546875 0.40625q0.875 0.140625 1.984375 0.140625q0.796875 0 1.359375 -0.125q0.5625 -0.125 0.921875 -0.359375q0.359375 -0.25 0.515625 -0.59375q0.171875 -0.359375 0.171875 -0.8125q0 -0.5 -0.28125 -0.84375q-0.265625 -0.34375 -0.71875 -0.609375q-0.4375 -0.28125 -1.015625 -0.5q-0.578125 -0.234375 -1.171875 -0.46875q-0.59375 -0.25 -1.171875 -0.53125q-0.5625 -0.28125 -1.015625 -0.671875q-0.4375 -0.390625 -0.71875 -0.90625q-0.265625 -0.515625 -0.265625 -1.234375q0 -0.625 0.265625 -1.21875q0.265625 -0.609375 0.8125 -1.078125q0.546875 -0.46875 1.40625 -0.75q0.859375 -0.296875 2.046875 -0.296875q0.296875 0 0.65625 0.03125q0.359375 0.03125 0.71875 0.078125q0.375 0.046875 0.734375 0.125q0.359375 0.0625 0.65625 0.125l0 1.484375q-0.71875 -0.203125 -1.4375 -0.296875q-0.703125 -0.109375 -1.375 -0.109375q-1.421875 0 -2.09375 0.46875q-0.65625 0.46875 -0.65625 1.265625q0 0.5 0.265625 0.859375q0.28125 0.34375 0.71875 0.625q0.453125 0.28125 1.015625 0.515625q0.578125 0.21875 1.171875 0.46875q0.59375 0.234375 1.15625 0.515625q0.578125 0.28125 1.015625 0.6875q0.453125 0.390625 0.71875 0.921875q0.28125 0.515625 0.28125 1.25z" fill-rule="nonzero"></path><path fill="#434343" d="m475.89807 308.11826l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m476.88245 330.11826l-1.859375 0l-1.8125 -3.875q-0.203125 -0.453125 -0.421875 -0.734375q-0.203125 -0.296875 -0.453125 -0.46875q-0.25 -0.171875 -0.546875 -0.25q-0.28125 -0.078125 -0.640625 -0.078125l-0.78125 0l0 5.40625l-1.65625 0l0 -12.125l3.25 0q1.046875 0 1.8125 0.234375q0.765625 0.234375 1.25 0.65625q0.484375 0.40625 0.703125 1.0q0.234375 0.578125 0.234375 1.296875q0 0.5625 -0.171875 1.078125q-0.15625 0.5 -0.484375 0.921875q-0.328125 0.40625 -0.828125 0.71875q-0.484375 0.296875 -1.109375 0.4375q0.515625 0.171875 0.859375 0.625q0.359375 0.4375 0.734375 1.171875l1.921875 3.984375zm-2.640625 -8.796875q0 -0.96875 -0.609375 -1.453125q-0.609375 -0.484375 -1.71875 -0.484375l-1.546875 0l0 4.015625l1.328125 0q0.59375 0 1.0625 -0.140625q0.46875 -0.140625 0.796875 -0.40625q0.328125 -0.265625 0.5 -0.640625q0.1875 -0.390625 0.1875 -0.890625z" fill-rule="nonzero"></path><path fill="#434343" d="m477.5387 339.99326l-4.109375 12.125l-2.234375 0l-4.03125 -12.125l1.875 0l2.609375 8.171875l0.75 2.390625l0.75 -2.390625l2.625 -8.171875l1.765625 0z" fill-rule="nonzero"></path><path fill="#434343" d="m475.89807 374.11826l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m476.88245 396.11826l-1.859375 0l-1.8125 
-3.875q-0.203125 -0.453125 -0.421875 -0.734375q-0.203125 -0.296875 -0.453125 -0.46875q-0.25 -0.171875 -0.546875 -0.25q-0.28125 -0.078125 -0.640625 -0.078125l-0.78125 0l0 5.40625l-1.65625 0l0 -12.125l3.25 0q1.046875 0 1.8125 0.234375q0.765625 0.234375 1.25 0.65625q0.484375 0.40625 0.703125 1.0q0.234375 0.578125 0.234375 1.296875q0 0.5625 -0.171875 1.078125q-0.15625 0.5 -0.484375 0.921875q-0.328125 0.40625 -0.828125 0.71875q-0.484375 0.296875 -1.109375 0.4375q0.515625 0.171875 0.859375 0.625q0.359375 0.4375 0.734375 1.171875l1.921875 3.984375zm-2.640625 -8.796875q0 -0.96875 -0.609375 -1.453125q-0.609375 -0.484375 -1.71875 -0.484375l-1.546875 0l0 4.015625l1.328125 0q0.59375 0 1.0625 -0.140625q0.46875 -0.140625 0.796875 -0.40625q0.328125 -0.265625 0.5 -0.640625q0.1875 -0.390625 0.1875 -0.890625z" fill-rule="nonzero"></path><path fill="#d0e0e3" d="m215.25197 141.8203l64.976364 0l0 384.75592l-64.976364 0z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m215.25197 141.8203l64.976364 0l0 384.75592l-64.976364 0z" fill-rule="nonzero"></path><path fill="#434343" d="m251.84589 285.66513q-1.453125 0.609375 -3.0625 0.609375q-2.5625 0 -3.9375 -1.53125q-1.375 -1.546875 -1.375 -4.546875q0 -1.46875 0.375 -2.640625q0.375 -1.171875 1.078125 -2.0q0.71875 -0.828125 1.71875 -1.265625q1.0 -0.453125 2.234375 -0.453125q0.84375 0 1.5625 0.15625q0.734375 0.140625 1.40625 0.4375l0 1.625q-0.65625 -0.359375 -1.375 -0.546875q-0.703125 -0.203125 -1.53125 -0.203125q-0.859375 0 -1.546875 0.328125q-0.6875 0.3125 -1.171875 0.921875q-0.484375 0.609375 -0.75 1.484375q-0.25 0.875 -0.25 2.0q0 2.359375 0.953125 3.5625q0.953125 1.1875 2.796875 1.1875q0.78125 0 1.5 -0.171875q0.71875 -0.1875 1.375 -0.515625l0 1.5625z" fill-rule="nonzero"></path><path fill="#434343" d="m251.75214 308.11826l-6.984375 0l0 -12.125l1.6875 0l0 10.71875l5.296875 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m247.00214 319.38388l-2.796875 0l0 -1.390625l7.25 0l0 1.390625l-2.78125 0l0 9.328125l2.78125 0l0 1.40625l-7.25 0l0 -1.40625l2.796875 0l0 -9.328125z" fill-rule="nonzero"></path><path fill="#434343" d="m251.37714 352.11826l-6.90625 0l0 -12.125l6.90625 0l0 1.390625l-5.25 0l0 3.75l5.03125 0l0 1.40625l-5.03125 0l0 4.171875l5.25 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#434343" d="m251.97089 374.11826l-2.15625 0l-3.53125 -7.5625l-1.03125 -2.421875l0 6.109375l0 3.875l-1.53125 0l0 -12.125l2.125 0l3.359375 7.15625l1.21875 2.78125l0 -6.5l0 -3.4375l1.546875 0l0 12.125z" fill-rule="nonzero"></path><path fill="#434343" d="m252.26776 385.3995l-3.59375 0l0 10.71875l-1.671875 0l0 -10.71875l-3.59375 0l0 -1.40625l8.859375 0l0 1.40625z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m282.13055 151.96812l159.52756 23.685043" fill-rule="nonzero"></path><path stroke="#e06666" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m282.13055 151.96812l156.1376 23.181747" fill-rule="evenodd"></path><path fill="#e06666" stroke="#e06666" stroke-width="1.0" stroke-linecap="butt" d="m438.26816 175.14987l-1.2775269 0.9472351l3.221405 -0.6586304l-2.8911133 -1.5661469z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m281.175 287.1842l160.53543 -11.842499" fill-rule="nonzero"></path><path stroke="#e06666" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m284.59277 286.9321l157.11765 -11.590393" fill-rule="evenodd"></path><path fill="#e06666" stroke="#e06666" stroke-width="1.0" stroke-linecap="butt" 
d="m284.5928 286.9321l1.0387878 -1.2042847l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m282.13196 195.7803l159.52756 23.685043" fill-rule="nonzero"></path><path stroke="#bf9000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m282.13196 195.7803l156.1376 23.181747" fill-rule="evenodd"></path><path fill="#bf9000" stroke="#bf9000" stroke-width="1.0" stroke-linecap="butt" d="m438.26956 218.96205l-1.2775574 0.94721985l3.2214355 -0.6586151l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m348.98535 176.09427l22.677155 2.960617l-2.551178 14.992126l-22.677155 -2.9606323z" fill-rule="nonzero"></path><path fill="#bf9000" d="m359.59622 198.60167l1.667572 0.43832397q-0.6546631 1.4272461 -1.8963623 2.1160583q-1.2235718 0.6753998 -2.9278564 0.45289612q-2.138092 -0.2791443 -3.170105 -1.7532654q-1.0320129 -1.4741364 -0.62805176 -3.8480682q0.41708374 -2.451004 1.9028931 -3.643692q1.5013123 -1.1906586 3.5309753 -0.92567444q1.9676819 0.2568817 2.9815674 1.7444153q1.0138855 1.4875183 0.6020508 3.9076996q-0.023620605 0.13873291 -0.08895874 0.42959595l-7.281952 -0.95069885q-0.17984009 1.6153107 0.4970398 2.570343q0.6768799 0.9550476 1.9008789 1.1148376q0.8986206 0.11732483 1.6125488 -0.26219177q0.73205566 -0.39291382 1.29776 -1.3905792zm-4.9844055 -3.3768005l5.453705 0.7120056q0.0987854 -1.2319489 -0.30758667 -1.9153137q-0.62753296 -1.0588989 -1.8980103 -1.224762q-1.131012 -0.1476593 -2.0523376 0.519928q-0.90322876 0.6542053 -1.1957703 1.9081421z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m281.1764 308.67355l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#bf9000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m284.59418 308.42142l157.11765 -11.590393" fill-rule="evenodd"></path><path fill="#bf9000" stroke="#bf9000" stroke-width="1.0" stroke-linecap="butt" d="m284.59418 308.42142l1.0388184 -1.2042542l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m281.17642 226.63377l159.52756 23.685028" fill-rule="nonzero"></path><path stroke="#134f5c" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m281.17642 226.63377l156.13763 23.181747" fill-rule="evenodd"></path><path fill="#134f5c" stroke="#134f5c" stroke-width="1.0" stroke-linecap="butt" d="m437.31406 249.8155l-1.2775574 0.9472351l3.2214355 -0.6586151l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m348.98257 205.9346l22.677185 2.960617l-2.5512085 14.992126l-22.677155 -2.960617z" fill-rule="nonzero"></path><path fill="#134f5c" d="m355.34494 231.08614l1.4348755 -8.432083l-1.4718933 -0.19216919l0.22033691 -1.2948608l1.4718933 0.19215393l0.17575073 -1.0328064q0.16525269 -0.9711609 0.4170227 -1.4267731q0.3425598 -0.61709595 1.0123901 -0.923584q0.67245483 -0.3218994 1.7569885 -0.18031311q0.6972046 0.09101868 1.5231323 0.35643005l-0.49447632 1.4166565q-0.49554443 -0.15924072 -0.96035767 -0.21992493q-0.7591858 -0.099121094 -1.1241455 0.18414307q-0.3623352 0.26785278 -0.5118408 1.1465149l-0.15216064 0.8940735l1.9057007 0.24880981l-0.22033691 1.2948608l-1.9057007 -0.24879456l-1.4348755 8.432068l-1.6423035 -0.21440125z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m280.22348 326.75668l160.53543 -11.842529" fill-rule="nonzero"></path><path stroke="#134f5c" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" 
d="m283.6413 326.50452l157.11765 -11.590363" fill-rule="evenodd"></path><path fill="#134f5c" stroke="#134f5c" stroke-width="1.0" stroke-linecap="butt" d="m283.6413 326.50455l1.0387878 -1.2042847l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m282.13055 351.51535l159.52756 23.685028" fill-rule="nonzero"></path><path stroke="#a64d79" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m282.13055 351.51535l156.1376 23.181702" fill-rule="evenodd"></path><path fill="#a64d79" stroke="#a64d79" stroke-width="1.0" stroke-linecap="butt" d="m438.26816 374.69708l-1.2775269 0.9472351l3.221405 -0.6586304l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m325.22943 326.89877l72.09448 9.480286l-2.551178 14.992126l-72.09448 -9.480316z" fill-rule="nonzero"></path><path fill="#741b47" d="m343.2176 353.58856l0.20983887 -1.2331543q-1.1760559 1.3267212 -2.9730835 1.0904236q-1.1618958 -0.152771 -2.045807 -0.91516113q-0.8657837 -0.77575684 -1.2138977 -1.9877319q-0.32998657 -1.2253418 -0.07556152 -2.7205505q0.24658203 -1.4489746 0.9288025 -2.572754q0.6976929 -1.1217346 1.7657471 -1.6274414q1.0834961 -0.5036621 2.2918396 -0.34475708q0.8830261 0.116119385 1.501709 0.5757141q0.6341858 0.4616089 0.9656372 1.119812l0.8183899 -4.8093567l1.6421204 0.21594238l-2.282074 13.410706l-1.5336609 -0.20169067zm-4.409912 -5.5441284q-0.3173828 1.8651428 0.31530762 2.893921q0.64816284 1.0307922 1.717102 1.1713562q1.0844116 0.14260864 1.9930115 -0.63619995q0.90859985 -0.7788086 1.2181396 -2.5977478q0.3383789 -1.9884644 -0.2788391 -3.0151978q-0.614563 -1.0421448 -1.7454529 -1.1908569q-1.0999146 -0.1446228 -1.995636 0.6516113q-0.89575195 0.79626465 -1.2236328 2.723114zm10.849762 10.4253845q-1.069458 -1.9057007 -1.6236267 -4.326721q-0.5515137 -2.4364624 -0.13183594 -4.9028015q0.37246704 -2.1888733 1.4079895 -4.085663q1.230011 -2.202179 3.340393 -4.272827l1.1928711 0.15682983q-1.4380493 1.7493286 -1.9333801 2.5194397q-0.7727661 1.1906738 -1.331543 2.519806q-0.6939697 1.6580505 -0.98773193 3.3844604q-0.7502136 4.4085693 1.2597351 9.164337l-1.1928711 -0.15686035zm10.895752 -5.8008423l1.6673584 0.4399109q-0.65460205 1.4268188 -1.8961487 2.114563q-1.2234497 0.67437744 -2.9275208 0.45028687q-2.137848 -0.2810974 -3.1697083 -1.7563477q-1.0318604 -1.4752502 -0.6279297 -3.8490906q0.41708374 -2.4509277 1.9027405 -3.642395q1.5011292 -1.1894531 3.530548 -0.9225769q1.9674377 0.2586975 2.9811707 1.747345q1.0137634 1.488617 0.6019287 3.9086914q-0.023590088 0.13873291 -0.08892822 0.42956543l-7.281067 -0.957428q-0.17984009 1.6153259 0.49691772 2.571106q0.67678833 0.95578 1.9006348 1.1166992q0.89852905 0.11816406 1.6123657 -0.2607727q0.7319641 -0.39227295 1.2976379 -1.3895569zm-4.983795 -3.3817444l5.453064 0.71707153q0.0987854 -1.2320251 -0.30752563 -1.9158325q-0.6274414 -1.0596008 -1.8977356 -1.2266235q-1.1308899 -0.14871216 -2.052124 0.51812744q-0.9031067 0.6534424 -1.1956787 1.9072571zm8.479828 7.0406494l0.32003784 -1.8805542l1.8899536 0.24850464l-0.32000732 1.8805847q-0.17575073 1.0327759 -0.65509033 1.6159058q-0.4819641 0.59851074 -1.3297424 0.83374023l-0.3440857 -0.77020264q0.56344604 -0.1465149 0.8873596 -0.5609436q0.3239441 -0.4144287 0.49658203 -1.2427673l-0.9450073 -0.12426758zm5.1080933 0.6717224l1.4348145 -8.431793l-1.4717102 -0.19351196l0.22033691 -1.2948303l1.4717102 0.19354248l0.17575073 -1.0328064q0.16525269 -0.97109985 0.41696167 -1.4265442q0.3425598 -0.6168518 1.0122986 -0.9227905q0.6723938 -0.32131958 1.7568054 
-0.17871094q0.69711304 0.091674805 1.5229187 0.35784912l-0.49441528 1.4163818q-0.49551392 -0.159729 -0.9602356 -0.2208252q-0.75909424 -0.099823 -1.1240234 0.18313599q-0.3623047 0.2675476 -0.5118103 1.1461792l-0.15213013 0.89404297l1.9054565 0.25057983l-0.22033691 1.2948303l-1.9054565 -0.25057983l-1.4348145 8.431793l-1.6421204 -0.21594238zm5.1492004 4.711548l-1.1773682 -0.15481567q3.4895935 -4.032593 4.2397766 -8.441162q0.2911682 -1.711029 0.19241333 -3.45755q-0.09185791 -1.4146729 -0.43447876 -2.7520142q-0.21466064 -0.87924194 -1.0047913 -2.937317l1.1773682 0.15481567q1.3442078 2.5249329 1.7718201 4.9450684q0.37423706 2.0822144 0.001739502 4.2710876q-0.41967773 2.4663086 -1.7736206 4.652191q-1.3358154 2.1725159 -2.992859 3.719696z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m280.22223 430.7826l160.53543 -11.842499" fill-rule="nonzero"></path><path stroke="#741b47" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m283.64005 430.53046l157.11761 -11.590363" fill-rule="evenodd"></path><path fill="#741b47" stroke="#741b47" stroke-width="1.0" stroke-linecap="butt" d="m283.64005 430.5305l1.0387878 -1.2042847l-2.9986572 1.3488464l3.1641235 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m281.17514 482.24542l159.52756 23.685059" fill-rule="nonzero"></path><path stroke="#3d85c6" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m281.17514 482.2454l156.13763 23.181732" fill-rule="evenodd"></path><path fill="#3d85c6" stroke="#3d85c6" stroke-width="1.0" stroke-linecap="butt" d="m437.31277 505.42715l-1.2775269 0.9472351l3.221405 -0.6586304l-2.8911133 -1.5661621z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m279.2668 518.6085l160.53546 -11.84256" fill-rule="nonzero"></path><path stroke="#3c78d8" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m282.68463 518.3563l157.11761 -11.590363" fill-rule="evenodd"></path><path fill="#3c78d8" stroke="#3c78d8" stroke-width="1.0" stroke-linecap="butt" d="m282.68463 518.3563l1.0388184 -1.2042236l-2.9986877 1.3488159l3.164154 0.8942261z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m325.22943 460.4956l72.09448 9.480316l-2.551178 14.992126l-72.09448 -9.480316z" fill-rule="nonzero"></path><path fill="#3d85c6" d="m340.89462 485.6507q-1.0632935 0.6639404 -1.970398 0.87557983q-0.9045105 0.19625854 -1.8804932 0.06793213q-1.5956421 -0.20983887 -2.3320007 -1.094635q-0.73376465 -0.90023804 -0.5265503 -2.117981q0.123291016 -0.7244873 0.5483093 -1.267456q0.4249878 -0.54296875 1.0120239 -0.8282471q0.58706665 -0.28527832 1.284668 -0.3826599q0.51934814 -0.07354736 1.5136719 -0.053100586q2.0558777 0.018188477 3.0559692 -0.1812439q0.05770874 -0.33914185 0.07345581 -0.4316101q0.17047119 -1.0019531 -0.2234497 -1.479248q-0.54071045 -0.63845825 -1.7955322 -0.8034668q-1.1618958 -0.152771 -1.7877808 0.1746521q-0.625885 0.3274536 -1.067627 1.3410034l-1.5872803 -0.44509888q0.4081421 -1.0022278 1.0114136 -1.5690308q0.6187744 -0.5647888 1.62146 -0.77963257q1.020813 -0.22824097 2.2911377 -0.061187744q1.2548218 0.16500854 1.9795532 0.5597229q0.72473145 0.39474487 1.0204773 0.8906555q0.29571533 0.49591064 0.31973267 1.1924744q0.022125244 0.42843628 -0.16409302 1.5228577l-0.37512207 2.2042847q-0.39083862 2.2967834 -0.4028015 2.9255981q0.006134033 0.6154175 0.23703003 1.2131348l-1.7350464 -0.22817993q-0.17681885 -0.54330444 -0.12072754 -1.2451172zm0.48486328 -3.6869812q-0.9588623 0.23635864 -2.815979 0.2600708q-1.046051 0.004272461 
-1.4957886 0.13424683q-0.44976807 0.12997437 -0.74246216 0.45394897q-0.2926941 0.3239746 -0.3661499 0.7555847q-0.11279297 0.6628418 0.30688477 1.1750488q0.43777466 0.49884033 1.3982544 0.6251526q0.96047974 0.12628174 1.7749023 -0.19213867q0.8170471 -0.33380127 1.2940063 -0.9960327q0.3604126 -0.53570557 0.54403687 -1.6147156l0.10229492 -0.6011658zm5.7039185 9.764496q-1.0694885 -1.9057007 -1.6236267 -4.3267517q-0.5515442 -2.4364624 -0.13183594 -4.902771q0.37246704 -2.1888733 1.407959 -4.0856934q1.230011 -2.202179 3.3404236 -4.272827l1.1928711 0.15686035q-1.4380493 1.7492981 -1.9333801 2.5194397q-0.77279663 1.1906433 -1.3315735 2.5197754q-0.6939392 1.6580505 -0.98773193 3.384491q-0.7501831 4.4085693 1.2597656 9.164337l-1.1928711 -0.15686035zm5.204529 -3.3500671l-1.5336609 -0.20166016l2.282074 -13.410706l1.6420898 0.21594238l-0.81314087 4.778534q1.2763977 -1.1717224 2.9030151 -0.9578247q0.89852905 0.11816406 1.6411133 0.59402466q0.74523926 0.46047974 1.1436768 1.1905212q0.41656494 0.7166748 0.5534973 1.6802673q0.13696289 0.963562 -0.041412354 2.0117798q-0.42492676 2.4971619 -1.8977051 3.7060852q-1.4701538 1.193512 -3.2052307 0.96533203q-1.7350464 -0.22814941 -2.467102 -1.7900391l-0.20721436 1.2177429zm0.82388306 -4.9346924q-0.29638672 1.7418518 0.034576416 2.5891113q0.5749817 1.3678894 1.9072571 1.5430603q1.0844116 0.14260864 2.0318909 -0.67837524q0.95007324 -0.83639526 1.267456 -2.7015686q0.32263184 -1.8959656 -0.28167725 -2.9052734q-0.6043396 -1.0092773 -1.6732483 -1.1498413q-1.0844421 -0.14257812 -2.0345154 0.69381714q-0.95007324 0.83639526 -1.2517395 2.6090698zm8.363373 6.1428223l0.32000732 -1.8805847l1.8899841 0.24853516l-0.32000732 1.8805542q-0.17575073 1.0328064 -0.65509033 1.6159058q-0.4819641 0.59851074 -1.3297424 0.83374023l-0.3440857 -0.7701721q0.5634155 -0.14654541 0.8873596 -0.5609741q0.3239441 -0.4144287 0.4965515 -1.2427368l-0.9449768 -0.12426758zm11.041382 1.4519043l0.20983887 -1.2331543q-1.1760559 1.3267517 -2.9730835 1.0904236q-1.1618958 -0.152771 -2.045807 -0.91516113q-0.8658142 -0.7757263 -1.2138977 -1.9877319q-0.32998657 -1.2253418 -0.07556152 -2.7205505q0.24658203 -1.4489746 0.9288025 -2.572754q0.6976929 -1.1217346 1.7657166 -1.6274109q1.0835266 -0.5036621 2.29187 -0.3447876q0.8830261 0.116119385 1.501709 0.5757141q0.6341858 0.4616089 0.9656372 1.119812l0.8183899 -4.809326l1.6421204 0.21591187l-2.282074 13.410706l-1.5336609 -0.20169067zm-4.409912 -5.5441284q-0.3173828 1.8651733 0.31530762 2.893921q0.64816284 1.0308228 1.717102 1.1713867q1.0844116 0.14257812 1.9930115 -0.63623047q0.90859985 -0.7788086 1.2181396 -2.5977173q0.3383484 -1.9884949 -0.2788391 -3.0152283q-0.614563 -1.0421448 -1.7454529 -1.1908569q-1.0999146 -0.1446228 -1.995636 0.65164185q-0.89575195 0.79626465 -1.2236328 2.7230835zm8.773895 10.152435l-1.1773682 -0.15481567q3.4895935 -4.032593 4.239807 -8.441162q0.2911377 -1.711029 0.19238281 -3.45755q-0.09185791 -1.4146729 -0.43447876 -2.7520142q-0.21466064 -0.87924194 -1.0047913 -2.937317l1.1773682 0.15481567q1.3442383 2.5249329 1.7718201 4.945099q0.37423706 2.0821838 0.0017700195 4.271057q-0.41970825 2.4663086 -1.7736511 4.6522217q-1.3358154 2.1724854 -2.992859 3.7196655z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m280.20993 463.73132l163.37009 -17.35431" fill-rule="nonzero"></path><path stroke="#93c47d" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m283.61786 463.36935l159.96216 -16.99234" fill-rule="evenodd"></path><path fill="#93c47d" stroke="#93c47d" stroke-width="1.0" stroke-linecap="butt" d="m283.61786 
463.36932l0.9994812 -1.2370911l-2.9536743 1.4446716l3.1912537 0.79193115z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m279.435 383.9106l158.6142 19.433075" fill-rule="nonzero"></path><path stroke="#93c47d" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m279.43503 383.91058l155.2125 19.016327" fill-rule="evenodd"></path><path fill="#93c47d" stroke="#93c47d" stroke-width="1.0" stroke-linecap="butt" d="m434.64752 402.9269l-1.2529907 0.97946167l3.2035828 -0.7404785l-2.9300842 -1.4919739z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m327.2065 356.47412l55.62204 7.338562l-2.551178 14.992126l-55.62207 -7.338562z" fill-rule="nonzero"></path><path fill="#93c47d" d="m337.8152 382.1998l-1.5335999 -0.20233154l2.2820435 -13.410461l1.6420288 0.21664429l-0.81314087 4.7784424q1.2763062 -1.1712341 2.902832 -0.9566345q0.898468 0.11853027 1.6410522 0.5947571q0.7451782 0.4607849 1.1435852 1.1910706q0.41653442 0.7168884 0.5534668 1.6805725q0.13693237 0.9636841 -0.041412354 2.0118713q-0.42495728 2.4971313 -1.897644 3.7055054q-1.4700928 1.1929321 -3.2050476 0.9640503q-1.7349854 -0.22891235 -2.4669495 -1.7911987l-0.20721436 1.2177124zm0.82385254 -4.9346313q-0.29638672 1.7418213 0.034576416 2.589264q0.57492065 1.3682251 1.907135 1.5439758q1.0843506 0.1430664 2.0317688 -0.67755127q0.9500122 -0.83602905 1.267395 -2.7011719q0.32263184 -1.8959656 -0.28164673 -2.905548q-0.60427856 -1.009613 -1.6731567 -1.1506348q-1.0843506 -0.1430664 -2.0343628 0.69299316q-0.9500427 0.83602905 -1.251709 2.608673zm10.417755 10.452484q-1.0694275 -1.90625 -1.6235352 -4.327667q-0.55148315 -2.4368286 -0.1317749 -4.9031067q0.37246704 -2.1888428 1.4079285 -4.085327q1.22995 -2.2017822 3.3402405 -4.2716675l1.1927795 0.15737915q-1.4379578 1.7488098 -1.933258 2.5187683q-0.7727661 1.1903992 -1.3315125 2.5193481q-0.6939087 1.6578674 -0.9877014 3.3842773q-0.7501831 4.408478 1.259613 9.165375l-1.1927795 -0.15737915zm10.658783 -6.269043l1.5898132 0.43041992q-0.54663086 1.6300049 -1.8091125 2.4405823q-1.2598572 0.795166 -2.8708801 0.5826111q-1.9983215 -0.26367188 -3.0017395 -1.7199402q-0.98532104 -1.469635 -0.57089233 -3.9050903q0.2675476 -1.5722656 0.9780884 -2.6763q0.7286682 -1.1174316 1.8972168 -1.5621338q1.1866455 -0.45809937 2.4413757 -0.2925415q1.5955505 0.21051025 2.4660645 1.1448975q0.8705139 0.9343872 0.9130249 2.453003l-1.6530151 0.034057617q-0.06448364 -1.0171509 -0.56921387 -1.5880737q-0.5046997 -0.57092285 -1.3257141 -0.67926025q-1.2547607 -0.16555786 -2.1968994 0.6242676q-0.9266968 0.7918396 -1.2545776 2.718628q-0.33309937 1.9576111 0.2738037 2.9517822q0.6095276 0.97875977 1.81781 1.1381836q0.97592163 0.12875366 1.7261963 -0.3711548q0.7529297 -0.5153198 1.1486511 -1.723938zm2.6727295 8.027954l-1.1773071 -0.15533447q3.489441 -4.031311 4.239624 -8.439819q0.2911377 -1.7109985 0.19241333 -3.457672q-0.09185791 -1.4147949 -0.43444824 -2.7523499q-0.21463013 -0.879364 -1.0046997 -2.9378967l1.1772766 0.15533447q1.3441467 2.5256348 1.771698 4.946106q0.37420654 2.0824585 0.001739502 4.2713013q-0.41967773 2.466278 -1.7735596 4.6517334q-1.3357849 2.172058 -2.9927368 3.7185974z" fill-rule="nonzero"></path></g></svg>
+
diff --git a/chapter/3/E_account_spreadsheet_vats.png b/chapter/3/E_account_spreadsheet_vats.png
new file mode 100644
index 0000000..8ce9624
--- /dev/null
+++ b/chapter/3/E_account_spreadsheet_vats.png
Binary files differ
diff --git a/chapter/3/E_vat.png b/chapter/3/E_vat.png
new file mode 100644
index 0000000..131b0de
--- /dev/null
+++ b/chapter/3/E_vat.png
Binary files differ
diff --git a/chapter/3/message-passing.md b/chapter/3/message-passing.md
index 5898e23..a35a75e 100644
--- a/chapter/3/message-passing.md
+++ b/chapter/3/message-passing.md
@@ -1,11 +1,463 @@
---
layout: page
-title: "Message Passing"
-by: "Joe Schmoe and Mary Jane"
+title: "Message Passing and the Actor Model"
+by: "Nathaniel Dempkowski"
---
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file message-passing %}
+# Introduction
-## References
+Message passing programming models have been discussed since the beginning of distributed computing, and as a result "message passing" can mean many things. A broad definition, like the one on Wikipedia, includes Remote Procedure Calls (RPC) and the Message Passing Interface (MPI). There are also process calculi like the pi-calculus and Communicating Sequential Processes (CSP) which have inspired practical message passing systems: Go's channels are based on the idea of first-class communication channels from the pi-calculus, and Clojure's `core.async` library is based on CSP. However, when people talk about message passing today they mostly mean the actor model, a ubiquitous and general message passing programming model that has been developing since the 1970s and is used today to build massively scalable systems.
-{% bibliography --file message-passing %} \ No newline at end of file
+In the field of message passing programming models, it is important to consider not only recent state-of-the-art research, but also the historic initial papers on message passing and the actor model that are the roots of the programming models described in more recent work. It is enlightening to see which aspects of the models have stuck around, and many of the more recent papers reference and address deficiencies present in older ones. Plenty of programming languages have been designed around message passing, especially around the actor model as a way of programming and organizing units of computation.
+
+In this chapter I describe the four primary variants of the actor model: classic actors, process-based actors, communicating event-loops, and active objects. I attempt to highlight historic and modern languages that exemplify these models, as well as the philosophies and tradeoffs programmers need to understand to make the best use of each model.
+
+Although the actor model originated as far back as the 1970s, it is still being developed and incorporated into the programming languages of today, as many recently published papers and systems in the field demonstrate. A few robust, industrial-strength actor systems power massive distributed systems: Akka has been used to serve PayPal's billions of transactions, {% cite PayPalAkka --file message-passing %} Erlang has been used to send messages for WhatsApp's hundreds of millions of users, {% cite ErlangWhatsAppTalk --file message-passing %} and Orleans has been used to serve Halo 4's millions of players. {% cite OrleansHalo4Talk --file message-passing %} These industrial actor frameworks take different approaches to monitoring, fault tolerance, and actor lifecycle management, which are detailed later in the chapter.
+
+An important framing for the actor models presented here is the question: why message passing, and specifically why the actor model? Given the vast number of distributed programming models out there, why was this one so important when it was initially proposed, and why has it facilitated advanced languages, systems, and libraries that are widely used today? As we'll see throughout this chapter, the broadest advantages of the actor model include isolation of each actor's state, scalability, and making it simpler for programmers to reason about their systems.
+
+# Original proposal of the actor model
+
+The actor model was originally proposed in _A Universal Modular ACTOR Formalism for Artificial Intelligence_ {% cite Hewitt:1973:UMA:1624775.1624804 --file message-passing %} in 1973 as a method of computation for artificial intelligence research. The original goal of the model was to describe parallel computation and communication in a way that could be safely distributed across concurrent workstations. The paper makes few presumptions about implementation details, instead defining the high-level message passing communication model. Gul Agha developed the model further by focusing on actors as a basis for concurrent object-oriented programming. This work is collected in _Actors: A Model of Concurrent Computation in Distributed Systems_. {% cite Agha:1986:AMC:7929 --file message-passing %}
+
+Actors are defined as independent units of computation with isolated state. These units have two core characteristics:
+
+* they can send messages asynchronously to one another, and
+* they have a mailbox which contains messages that they have received, allowing messages to be received at any time and then queued for processing.
+
+Messages are of the form:
+
+```
+(request: <message-to-target>
+ reply-to: <reference-to-messenger>)
+```
+
+Actors attempt to process messages from their mailboxes by matching the `request` field sequentially against patterns or rules, which can be specific values or logical statements. When a pattern is matched, computation occurs and the result of that computation is implicitly returned to the reference in the message's `reply-to` field. This is a type of continuation, where the continuation is the message to another actor. These messages are one-way, and there is no guarantee that a message will ever be received in response. The actor model is so general because it places few restrictions on systems: asynchrony and the absence of message delivery guarantees make it possible to model real distributed systems. If message delivery were guaranteed, the model would be much less general, able to model only systems which include complex message-delivery protocols. This originally-proposed variant of the actor model is limited compared to many of the others, but the early ideas of exploiting distributed processing power to enable greater parallel computation are already there.
+
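+To make the request/reply-to shape concrete, here is a minimal sketch using Akka's classic actor API in Scala; the `Request` message and `Doubler` actor are illustrative names of my own, not constructs from the original paper:
+
+```
+import akka.actor.{Actor, ActorRef, ActorSystem, Props}
+
+// A message carrying a payload and a reference back to the requester,
+// mirroring the (request: ... reply-to: ...) shape above.
+case class Request(n: Int, replyTo: ActorRef)
+
+class Doubler extends Actor {
+  def receive = {
+    // The pattern the actor matches incoming messages against; the
+    // reply is itself a one-way message with no delivery guarantee.
+    case Request(n, replyTo) => replyTo ! (n * 2)
+  }
+}
+
+// Usage (sketch): create the actor and send it a request.
+// val system = ActorSystem("demo")
+// val doubler = system.actorOf(Props[Doubler], "doubler")
+// doubler ! Request(21, requester)
+```
+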
+Interestingly, the original paper introduces the actor model in the context of hardware, presenting actors almost as another machine architecture. It describes the concepts of an "actor machine" and a "hardware actor", which is quite different from the way we think about modern actors as abstracting away the hardware details we don't want to deal with. The concept is reminiscent of a Lisp machine, though specially built to utilize the actor model of computation for artificial intelligence.
+
+# Classic actor model
+
+The classic actor was formalized as a unit of computation in Agha's _Concurrent Object-Oriented Programming_. {% cite Agha:1990:COP:83880.84528 --file message-passing %} It expands on the original proposal of actors, keeping the ideas of asynchronous communication through messages between isolated units of computation and state. A classic actor provides the following primitive operations:
+
+* `create`: create an actor from a behavior description and a set of parameters, including other existing actors
+* `send`: send a message to another actor
+* `become`: have an actor replace its behavior with a new one
+
+As in the original actor model, classic actors communicate by asynchronous message passing. They are primitive, independent units of computation which can be used to build higher-level abstractions for concurrent programming. Actors are uniquely addressable, and have their own independent mailboxes or message queues. State changes in the classic actor model are specified and aggregated using the `become` operation: each time an actor processes a message, it computes the behavior it will use for the next message it processes. A `become` operation's argument is a named continuation, `b`, representing the behavior the actor should be updated with, along with some state that should be passed to `b`.
+
+This continuation model is flexible. You could create a purely functional actor where the new behavior would be identical to the original and no state would be passed. An example of this is the `AddOne` actor below, which processes a message according to a single fixed behavior.
+
+```
+(define AddOne
+ [add-one [n]
+ (return (+ n 1))])
+```
+
+The model also enables the creation of stateful actors which change behavior and pass along an object representing some state. This state can be the result of many operations, which enables the aggregation of state changes at a higher level of granularity than something like variable assignment. An example of this is a `BankAccount` actor given in _Concurrent Object-Oriented Programming_. {% cite Agha:1990:COP:83880.84528 --file message-passing %}
+
+```
+(define BankAccount
+ (mutable [balance]
+ [withdraw-from [amount]
+ (become BankAccount (- balance amount))
+ (return 'withdrew amount)]
+ [deposit-to [amount]
+ (become BankAccount (+ balance amount))
+ (return 'deposited amount)]
+ [balance-query
+ (return 'balance-is balance)]))
+```
+
+Stateful continuations enable flexibility in the behavior of an actor over time in response to the actions of other actors in the system. Limiting state and behavior changes to `become` operations changes the level at which one analyzes a system, freeing the programmer from worrying about interference during state changes. In the example above, the programmer only has to worry about changes to the account's balance during `become` statements in response to a sequential queue of well-defined message types.
+
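+For comparison with a modern framework, here is a minimal sketch of the same `BankAccount` pattern using Akka's classic actor API in Scala. The message names are illustrative, and the correspondence is approximate (`actorOf` plays the role of `create`, `!` of `send`, and `context.become` of `become`):
+
+```
+import akka.actor.Actor
+
+// Illustrative message types for the account protocol.
+case class Deposit(amount: Int)
+case class Withdraw(amount: Int)
+case object BalanceQuery
+
+class BankAccount extends Actor {
+  def receive = active(0)
+
+  // Each state change installs a new behavior closed over the new
+  // balance, so the balance can only change at become boundaries.
+  def active(balance: Int): Receive = {
+    case Deposit(amount)  => context.become(active(balance + amount))
+    case Withdraw(amount) => context.become(active(balance - amount))
+    case BalanceQuery     => sender() ! balance
+  }
+}
+```
+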
+If you squint a little, the `BankAccount` definition sounds similar to Alan Kay's original definition of object-oriented programming, which describes a system where objects have behavior and their own memory, and communicate by sending and receiving messages that may contain other objects or simply trigger actions. Kay's ideas sound closer to what we consider the actor model today, and less like what we consider object-oriented programming. That is, Kay's focus in this description is on designing the messaging and communication that dictate how objects interact.
+
+<blockquote cite="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">
+<p>The big idea is "messaging" -- that is what the kernal [sic] of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be.</p>
+<footer>Alan Kay {% cite KayQuote --file message-passing %}</footer>
+</blockquote>
+
+## Concurrent Object-Oriented Programming (1990)
+
+One could say that the renaissance of actor models in mainstream programming began with Gul Agha's work. His seminal book _Actors: A Model of Concurrent Computation in Distributed Systems_ {% cite Agha:1986:AMC:7929 --file message-passing %} and later paper, _Concurrent Object-Oriented Programming_ {% cite Agha:1990:COP:83880.84528 --file message-passing %}, offer classic actors as a natural solution to problems at the intersection of two trends in computing: increased distributed computing resources and the rising popularity of object-oriented programming. The paper defines common patterns of parallelism: pipeline concurrency, divide and conquer, and cooperative problem solving. It then focuses on how the actor model can be used to solve these problems in an object-oriented style, on some of the challenges that arise with distributed actors and objects, and on strategies and tradeoffs for communication and reasoning about behaviors.
+
+The paper surveys a number of systems and languages implementing solutions in this space, and starts to identify some of the advantages, from the programmer's perspective, of programming with actors. One of the core languages used for examples in the paper is Rosette {% cite Tomlinson:1988:ROC:67387.67410 --file message-passing %}, but the paper largely focuses on the potential and benefits of the model. Agha claims the benefits of using objects stem from a separation of concerns.
+
+<blockquote>
+<p>By separating the specification of what is done (the abstraction) from how it is done (the implementation), the concept of objects provides modularity necessary for programming in the large. It turns out that concurrency is a natural consequence of the concept of objects.</p>
+<footer>Gul Agha {% cite Agha:1990:COP:83880.84528 --file message-passing %}</footer>
+</blockquote>
+
+Splitting concerns into multiple pieces makes it easier for the programmer to reason about the behavior of the program. It also allows the programmer to use more flexible abstractions in their programs.
+
+<blockquote>
+<p>It is important to note that the actor languages give special emphasis to developing flexible program structures which simplify reasoning about programs.</p>
+<footer>Gul Agha {% cite Agha:1990:COP:83880.84528 --file message-passing %}</footer>
+</blockquote>
+
+This flexibility turns out to be a highly discussed advantage which continues to be touted in modern actor systems.
+
+## Rosette
+
+Rosette was both a language for concurrent object-oriented programming of actors and a runtime system for managing the usage of and access to resources by those actors. Rosette {% cite Tomlinson:1988:ROC:67387.67410 --file message-passing %} is mentioned throughout Agha's _Concurrent Object-Oriented Programming_, {% cite Agha:1990:COP:83880.84528 --file message-passing %} and the code examples given in the paper are written in Rosette. Agha is even an author on the Rosette paper, so it's clear that Rosette is foundational to the classic actor model. It seems to be a language which almost defines what the classic actor model looks like in the context of concurrent object-oriented programming.
+
+The motivation behind Rosette was to provide strategies for dealing with problems like search, where the programmer needs a means to control how resources are allocated to sub-computations to optimize performance in the face of combinatorial explosion. For example, in a search problem you might first compute an initial set of results that you want to further refine. It would be too computationally expensive to exhaustively refine every result, so you want to choose the best ones based on some metric and only proceed with those. Rosette supports the use of concurrency in solving computationally intensive problems whose structure is not statically defined, but rather depends on some heuristic to return results. Rosette's architecture uses actors in two distinct ways, described as two layers with different responsibilities:
+
+* _Interface layer_: This implements mechanisms for monitoring and control of resources. The system resources and hardware are viewed as actors.
+* _System environment_: This is comprised of actors who actually describe the behavior of concurrent applications and implement resource management policies based on the interface layer.
+
+The Rosette language has a number of object-oriented features, many of which we take for granted in modern object-oriented programming languages. It implements dynamic creation and modification of objects for extensible and reconfigurable systems, supports inheritance, and has objects which can be organized into classes. The more interesting characteristic is that concurrency in Rosette is inherent and declarative rather than explicit, as in many modern object-oriented languages. In Rosette, concurrency is an inherent property of the program structure and resource allocation. This is different from a language like Java, where all of the concurrency is very explicit. The Java concurrency model is best covered in _Java Concurrency in Practice_, though Java 8 introduces a few new concurrency techniques that the book does not discuss. {% cite Peierls:2005:JCP:1076522 --file message-passing %} The motivation behind this declarative concurrency comes from the heterogeneous nature of distributed concurrent computers. Different computers and architectures have varying concurrency characteristics, and the authors argue that forcing the programmer to tailor their concurrency to a specific machine makes it difficult to re-map a program onto another one. This idea of using actors as a more flexible and natural abstraction over concurrency and distribution of resources is an important one which shows up in some form within many actor systems.
+
+Actors in Rosette are organized into three types of classes which describe different aspects of the actors within the system:
+
+* _Abstract classes_ specify requests, responses, and actions within the system which can be observed. The idea behind these is to expose the higher-level behaviors of the system, but tailor the actual actor implementations to the resource constraints of the system.
+* _Representation classes_ specify the resource management characteristics of implementations of abstract classes.
+* _Behavior classes_ specify the actual implementations of actors in given abstract and representation classes.
+
+These classes provide a concrete object-oriented abstraction for organizing actors that handles the practical constraints of a distributed system. They represent a step toward handling not just the information flow and behavior of the system, but also the underlying hardware and resources. Rosette's model feels like a direct expression of concerns that every production actor system inevitably ends up addressing.
+
+## Akka
+
+Akka is an effort to bring an industrial-strength actor model to the JVM runtime, which was not explicitly designed to support actors. Akka was developed out of initial efforts of [Scala Actors](#scala-actors) to bring the actor model to the JVM. There are a few notable changes from Scala Actors that make Akka worth mentioning, especially as it is being actively developed while Scala Actors is not. Some important changes are detailed in _On the Integration of the Actor Model in Mainstream Technologies: The Scala Perspective_. {% cite Haller:2012:IAM:2414639.2414641 --file message-passing %}
+
+Akka provides a programming interface with both Java and Scala bindings for actors which looks similar to Scala Actors, but has different semantics in how it processes messages. Akka's `receive` operation defines a global message handler which does not block when no matching message is available; it is instead only triggered when a matching message can be processed. Akka also will not leave a message in an actor's mailbox if no pattern matches it: the message is simply discarded and an event is published to the system. Akka's interface also provides stronger encapsulation to avoid exposing direct references to actors. Akka actors have a limited `ActorRef` interface which only provides methods to send or forward messages to the underlying actor, and additional checks ensure that no direct reference to an instance of an `Actor` subclass is accessible after an actor is created. To some degree this fixes problems in Scala Actors, where public methods could be called on actors, breaking many of the guarantees programmers expect from message-passing. This system is not perfect, but in most cases it limits the programmer to simply sending messages to an actor through a limited interface.
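+
+A short Scala sketch of these two points, using the Akka classic API (the `Counter` protocol is our own illustration): `receive` fires only for matching messages, and actor creation yields only an `ActorRef`, never the `Actor` instance itself.
+
+```
+import akka.actor.{Actor, ActorSystem, Props}
+
+class Counter extends Actor {
+  private var count = 0
+  def receive: Receive = {
+    case "tick" => count += 1
+    case "read" => sender() ! count
+    // Anything else is unhandled: dropped and published as an event.
+  }
+}
+
+val system = ActorSystem("demo")
+// actorOf returns an ActorRef; the Counter instance itself is never
+// exposed, so other code can only send it messages.
+val counter = system.actorOf(Props[Counter], "counter")
+counter ! "tick"
+```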
+
+The Akka runtime also provides performance advantages over Scala Actors. The runtime uses a single continuation closure for many or all messages an actor processes, and provides methods to change this global continuation. This can be implemented more efficiently on the JVM, as opposed to Scala Actors' continuation model which uses control-flow exceptions which cause additional overhead. Additionally, nonblocking message insert and task schedule operations are used for extra performance.
+
+Akka is the production-ready result of the classic actor model lineage. It is actively developed and actually used to build scalable systems. The production usage of Akka is detailed later in this chapter. Akka has been successful enough that it has been ported to other languages/runtimes. There is an [Akka.NET](http://getakka.net/) project which brings the Akka programming model to .NET and Mono using C# and F#. Akka has even been ported to JavaScript as [Akka.js](https://github.com/unicredit/akka.js/), built on top of [Scala.js](http://www.scala-js.org/).
+
+# Process-based actors
+
+The process-based actor model is essentially an actor modeled as a process that runs from start to completion. This view is broadly similar to the classic actor, but the two models differ in the mechanics of managing the lifecycle and behaviors of actors. The first language to explicitly implement this model was Erlang, {% cite Armstrong:2010:ERL:1810891.1810910 --file message-passing %} whose designers even say in a retrospective that their view of computation is broadly similar to Agha's classic actor model.
+
+Process-based actors are defined as computations which run from start to completion, unlike the classic actor model, which defines an actor almost as a state machine of behaviors plus the logic to transition between them. Similar state-machine-like behavior transitions are possible through recursion with process-based actors, but programming them feels fundamentally different from using the previously described `become` statement.
+
+These actors use a `receive` primitive to specify messages that an actor can receive during a given state or point in time. `receive` statements have some notion of defining acceptable messages, usually based on patterns, conditionals, or types. If a message is matched, the corresponding code is evaluated; otherwise the actor simply blocks until it gets a message that it knows how to handle. The semantics of this `receive` are different from the `receive` previously described in the section about Akka, which is explicitly only triggered when an actor gets a message it knows how to handle. Depending on the language implementation, `receive` might specify an explicit message type or perform some pattern matching on message values.
+
+An example of these core concepts of a process with a defined lifecycle and use of the `receive` statement to match messages is a simple counter process written in Erlang. {% cite Armstrong:2010:ERL:1810891.1810910 --file message-passing %}
+
+```
+counter(N) ->
+ receive
+ tick ->
+ counter(N+1);
+ {From, read} ->
+ From ! {self(), N},
+ counter(N)
+ end.
+```
+
+This demonstrates the use of `receive` to match on two different message values: `tick`, which increments the counter, and `{From, read}`, where `From` is a process identifier and `read` is a literal. In response to another process sending the message `tick` (by evaluating something like `CounterId ! tick.`), the process calls itself with an incremented value. This echoes the `become` statement, but uses recursion and an argument value instead of a named behavior continuation and some state. If the counter receives a message of the form `{<processId>, read}`, it sends that process a message containing the counter's process id and value, and calls itself recursively with the same value.
+
+## Erlang
+
+Erlang originated the process-based actor model, and its implementation gets to the core of what it means to be a process-based actor. Ericsson originally developed this model to program large, highly reliable, fault-tolerant telecommunications switching systems. Erlang's development started in 1985, but its model of programming is still used today. The Erlang model was motivated by four key properties that were needed to program fault-tolerant systems:
+
+* Isolated processes
+* Pure message passing between processes
+* Detection of errors in remote processes
+* The ability to determine what type of error caused a process crash
+
+The Erlang researchers initially believed that shared memory was an obstacle to fault-tolerance, and they saw message-passing of immutable data between processes as the way to avoid shared memory. There was a concern that passing around and copying data would be costly, but the Erlang developers saw fault-tolerance as a more important concern than performance. This model was developed essentially independently of other actor systems and research (its development began before Agha's formalization of the classic actor model was even published), yet it ends up with a view of computation broadly similar to Agha's classic actor model.
+
+Erlang actors run as lightweight, isolated processes. They have no visibility into one another and pass around pure messages, which are immutable. These messages carry no dangling pointers or data references between objects, which really enforces the idea of immutable, separated data between actors, unlike many of the early classic actor implementations in which references to actors and data could be passed around freely.
+
+Erlang implements a blocking `receive` operation as the means of processing messages from a process's mailbox. It uses value matching on message tuples to describe the kinds of messages a given actor can accept.
+
+Erlang also seeks to build failure into the programming model, as one of the core assumptions of a distributed system is that machines and network connections are going to fail. Erlang provides the ability for processes to monitor one another through two primitives:
+
+* `monitor`: one-way unobtrusive notification of process failure/shutdown
+* `link`: two-way notification of process failure/shutdown allowing for coordinated termination
+
+These primitives can be used to construct complex supervision hierarchies that handle failure in isolation, rather than letting failures impact your entire system. Supervision hierarchies are, notably, almost the only fault-tolerance scheme that exists in the world of actors. Almost every actor system that is used to build distributed systems takes a similar approach, and it seems to work. Erlang's philosophies, used to build a reliable fault-tolerant telephone exchange, seem to be broadly applicable to the fault-tolerance problems of distributed systems.
+
+An example of a process `monitor` written in Erlang is given below. {% cite Armstrong:2010:ERL:1810891.1810910 --file message-passing %}
+
+```
+on_exit(Pid, F) ->
+    spawn(fun() -> monitor(Pid, F) end).
+
+monitor(Pid, F) ->
+    process_flag(trap_exit, true),
+    link(Pid),
+    receive
+        {'EXIT', Pid, Why} ->
+            F(Why)
+    end.
+```
+
+This defines two functions: `on_exit`, which simply spawns a `monitor` process to call a given function when a given process exits, and `monitor`, which uses `link` to receive a message when the given process exits and then call a function with the reason it exited. You could imagine chaining many of these `monitor` and `link` operations together to build processes that watch one another for failure and perform recovery operations depending on the failure behavior.
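+
+The same monitoring idea survives in Akka as "DeathWatch": a watcher receives a `Terminated` message when a watched actor stops. A minimal Scala sketch (the recovery action is our own placeholder):
+
+```
+import akka.actor.{Actor, ActorRef, Terminated}
+
+class Watcher(target: ActorRef) extends Actor {
+  // Register interest in target's termination, like Erlang's monitor.
+  context.watch(target)
+
+  def receive: Receive = {
+    case Terminated(`target`) =>
+      // React to the failure here, e.g. respawn the worker or escalate.
+      context.stop(self)
+  }
+}
+```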
+
+It is worth mentioning that Erlang achieves all of this through the Erlang virtual machine (BEAM), which runs as a single OS process with one scheduler thread per core. This single OS process then manages many lightweight Erlang processes. The Erlang VM implements all of the concurrency, monitoring, and garbage collection for Erlang processes, almost acting like an operating system itself. This is unlike any other language or actor system described here.
+
+## Scala Actors
+
+Scala Actors is an example of taking and enhancing the Erlang model while bringing it to a new platform. Scala Actors brings lightweight Erlang-style message-passing concurrency to the JVM and integrates it with the heavyweight thread/process concurrency models. {% cite Haller:2009:SAU:1496391.1496422 --file message-passing %} The original paper describes this well as "an impedance mismatch between message-passing concurrency and virtual machines such as the JVM": VMs usually map threads to heavyweight processes, but a lightweight process abstraction reduces programmer burden and leads to more natural abstractions. The authors claim that "The user experience gained so far indicates that the library makes concurrent programming in a JVM-based system much more accessible than previous techniques."
+
+The realization of this model depends on efficiently multiplexing actors to threads. This technique was originally developed in Scala Actors and later adopted by Akka. The integration allows actors to invoke methods that block the underlying thread in a way that doesn't prevent other actors from making progress. This is important in an event-driven system where handlers are executed on a thread pool, because the underlying event handlers can't block threads without risking thread-pool starvation. The end result is that Scala Actors enabled a new lightweight concurrency primitive on the JVM, with enhancements over Erlang's model. The Erlang model was further enhanced with Scala's pattern-matching capabilities, which enable more advanced pattern matching on messages than Erlang's tuple value matching. Scala Actors' message handlers are of type `Any => Unit`, which means actors are essentially untyped: they can receive literally any type and match on it with potential side effects. This behavior can be problematic, and systems like Cloud Haskell and Akka aim to improve on it. Akka especially draws directly on the work of Scala Actors, and has now become the standard actor framework for Scala programmers.
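+
+A small sketch of this style, using the (now-deprecated) `scala.actors` library; the counter protocol is our own illustration:
+
+```
+import scala.actors.Actor
+import scala.actors.Actor._
+
+val counter = actor {
+  var n = 0
+  loop {
+    // react suspends the actor without blocking a JVM thread; messages
+    // of any type may arrive, and unmatched ones stay in the mailbox.
+    react {
+      case "tick"                => n += 1
+      case ("read", from: Actor) => from ! n
+    }
+  }
+}
+
+counter ! "tick"
+```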
+
+## Cloud Haskell
+
+Cloud Haskell is an extension of Haskell which essentially implements an enhanced version of the computational message-passing model of Erlang in Haskell. {% cite epstein2011 --file message-passing %} It enhances Erlang's model with advantages from Haskell's model of functional programming in the form of purity, types, and monads. Cloud Haskell enables the use of pure functions for remote computation, which means that these functions are idempotent and can be restarted or run elsewhere in the case of failure without worrying about side-effects or undo mechanisms. This alone isn't so different from Erlang, which operates on immutable data in the context of isolated memory.
+
+One of the largest improvements over Erlang is the introduction of typed channels for sending messages. These provide guarantees to the programmer about the types of messages their actors can handle, which is something Erlang lacks. In Erlang, all you have is dynamic pattern matching based on value patterns, and the hope that messages of the wrong type don't get passed around your system. Cloud Haskell processes can also use multiple typed channels to pass messages between actors, rather than Erlang's single untyped channel. Haskell's monadic types also allow programmers to ensure that pure and effectful code are not mixed, which makes it easier to reason about where side-effects happen in your system.
+
+Cloud Haskell has shared memory within an actor process, which is useful for certain applications. This might sound like it could cause problems, but shared-memory structures are forbidden by the type system from being shared across actors. Finally, Cloud Haskell allows for the serialization of function closures, which means that higher-order functions can be distributed across actors: as long as a function and its environment are serializable, they can be spun off as a remote computation and seamlessly continued elsewhere. These improvements over Erlang make Cloud Haskell a notable project in the space of process-based actors. Cloud Haskell is actively maintained, and its developers have also built the Cloud Haskell Platform, which aims to provide the common functionality needed to build and manage a production actor system using Cloud Haskell.
+
+# Communicating event-loops
+
+The communicating event-loop model was introduced in the E language, {% cite Miller:2005:CSP:1986262.1986274 --file message-passing %} and is one that aims to change the level of granularity at which communication happens within an actor-based system. The previously described actor systems organize communication at the actor level, while the communicating event-loop model puts communication between actors in the context of actions on objects within those actors. The overall messages still reference higher-level actors, but those messages refer to more granular actions within an actor's state.
+
+## E Language
+
+The E language implements a model which is closer to imperative object-oriented programming. A single actor-like unit of computation, called a "vat", contains many objects. The vat holds not just those objects, but also a mailbox for all of the objects inside, as well as a call stack for methods on those objects. The shared message queue and event loop act as one abstraction barrier for computation across actors, while references to the objects within a vat are used for addressing communication and computation across actors, operating at a different level of abstraction.
+
+This immediately raises other concerns. When handing out references at a different level of granularity than actor-global, how do you ensure the benefits of isolation that the actor model provides? After all, by referencing objects inside of an actor from many places it sounds like we're just reinventing shared-memory problems. This is answered by two different modes of execution: immediate and eventual calls.
+
+<figure class="main-container">
+ <img src="./E_vat.png" alt="An E vat" />
+ <footer>{% cite Miller:2005:CSP:1986262.1986274 --file message-passing %}</footer>
+</figure>
+
+This diagram shows an E vat, which consists of a heap of objects and a thread of control for executing methods on those objects. The stack and queue represent messages in the two different modes of execution that are used when operating on objects in E. The stack is used for immediate execution, while the queue is used for eventual execution. Immediate calls are processed first, and new immediate calls are added to the top of the stack. Eventual calls are then processed from the queue afterwards. These different modes of message passing are highlighted in communication across vats below.
+
+<figure class="main-container">
+ <img src="./E_account_spreadsheet_vats.png" alt="Communication between E vats" />
+ <footer>{% cite Miller:2005:CSP:1986262.1986274 --file message-passing %}</footer>
+</figure>
+
+From this diagram we can see that local calls among objects within a vat are handled on the immediate stack. Then when a call needs to be made across vats, it is handled on the eventual queue, and delivered to the appropriate object within the vat at some point in the future.
+
+E's reference-states define many of the isolation guarantees around computation that we expect from actors. Two different ways to reference objects are defined:
+
+* _Near reference_: This is a reference only possible between two objects in the same vat. These expose both synchronous immediate-calls and asynchronous eventual-sends.
+* _Eventual reference_: This is a reference which crosses vat boundaries, and only exposes asynchronous eventual-sends, not synchronous immediate-calls.
+
+The difference in semantics between the two types of references means that only objects within the same vat are granted synchronous access to one another. The most an eventual reference can do is asynchronously send and queue a message for processing at some unspecified point in the future. This means that within the execution of a vat, a degree of temporal isolation can be defined between the objects and communications within the vat, and the communications to and from other vats.
+
+This code example ties into the previous diagrams, and demonstrates the two different types of reference semantics. {% cite Miller:2005:CSP:1986262.1986274 --file message-passing %}
+
+```
+def makeStatusHolder(var myStatus) {
+ def myListeners := [].diverge()
+
+ def statusHolder {
+ to addListener(newListener) {
+ myListeners.push(newListener)
+ }
+
+ to getStatus() { return myStatus }
+
+ to setStatus(newStatus) {
+ myStatus := newStatus
+ for listener in myListeners {
+ listener <- statusChanged(newStatus)
+ }
+ }
+ }
+ return statusHolder
+}
+```
+
+This creates an object `statusHolder` with methods defined by `to` statements. A method invocation from another vat-local object, like `statusHolder.setStatus(123)`, causes a message to be synchronously delivered to this object. Other objects can register as event listeners by calling either `statusHolder.addListener()` or `statusHolder <- addListener()`, registering synchronously or eventually respectively; `<-` is the eventual-send operator. They will eventually be notified whenever the value of the `statusHolder` changes.
+
+The motivation for this referencing model comes from wanting to work at a finer-grained level of references than a traditional actor exposes. The simplest example is that you want to ensure that another actor in your system can read a value, but can't write to it. How do you do that within another actor model? You might imagine creating a read-only variant of an actor which doesn't expose a write message type, or which proxies only `read` messages to another actor that supports both `read` and `write` operations. In E, because you are handing out object references, you would simply pass around references only to a `read` method, and you don't have to worry about other actors in your system being able to write values. These finer-grained references make reasoning about state guarantees easier because you are no longer exposing references to an entire actor, but instead the granular capabilities of the actor.
+
+Finer-grained references also enable partial failures and recoveries within an actor. Individual objects within an actor can fail and be restarted without affecting the health of the entire actor. This is in a way similar to the supervision hierarchies seen in Erlang, and even means that messages to a failed object could be queued for processing while that object is recovering. This could not happen with the same granularity in another actor system, but feels like a natural outcome of object-level references in E.
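+
+The capability flavor of this idea translates to most languages; a tiny Scala sketch (our own illustration, not E code): instead of sharing a whole object, share only the function you want callers to be able to invoke.
+
+```
+// A cell whose write capability we keep to ourselves.
+final class Cell(private var value: Int) {
+  def read: Int = value
+  def write(v: Int): Unit = { value = v }
+}
+
+val cell = new Cell(0)
+// Hand out only the read capability; holders cannot write.
+val readOnly: () => Int = () => cell.read
+```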
+
+## AmbientTalk/2
+
+AmbientTalk/2 is a modern revival of the communicating event-loop actor model as a distributed programming language with an emphasis on developing mobile peer-to-peer applications. {% cite Cutsem:2007:AOE:1338443.1338745 --file message-passing %} This idea was originally realized in AmbientTalk/1, {% cite Dedecker:2006:APA:2171327.2171349 --file message-passing %} where actors were modelled as ABCL/1-like active objects, {% cite Yonezawa:1986:OCP:960112.28722 --file message-passing %} but AmbientTalk/2 models actors similarly to E's vats. The authors of AmbientTalk/2 felt limited by not allowing passive objects within an actor to be referenced by other actors, so they chose the more fine-grained approach, which allows for remote interactions between, and movement of, passive objects.
+
+Actors in AmbientTalk/2 are representations of event loops. The message queue is the event queue, messages are events, asynchronous message sends are event notifications, and object methods are the event handlers. The event loop serially processes messages from the queue to avoid race conditions. Local objects within an actor are owned by that actor, which is the only entity allowed to directly execute methods on them. As in E, objects within an actor can communicate with each other synchronously or asynchronously, while objects referenced from outside an actor can only be communicated with asynchronously, by sending messages. Objects can additionally declare themselves serializable, which means they can be copied and sent to other actors for use as local objects. When this happens, no relationship is maintained between the original object and its copy.
+
+AmbientTalk/2 uses the event loop model to enforce three essential concurrency control properties, sketched in code after this list:
+
+* _Serial execution_: Events are processed sequentially from an event queue, so the handling of a single event is atomic with respect to other events.
+* _Non-blocking communication_: An event loop doesn't suspend computation to wait for other event loops, instead all communication happens strictly as asynchronous event notifications.
+* _Exclusive state access_: Event handlers (object methods) and their associated state belong to a single event loop, which has access to their mutable state. Mutation of other event loop state is only possible indirectly by passing an event notification asking for mutation to occur.
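+
+A minimal Scala sketch of these three properties (our own illustration, not AmbientTalk code):
+
+```
+import java.util.concurrent.LinkedBlockingQueue
+
+final class EventLoop {
+  private val queue = new LinkedBlockingQueue[Runnable]()
+  private var state = 0 // exclusive state access: touched only by the loop thread
+
+  // Non-blocking communication: senders only enqueue an event notification.
+  def send(event: Runnable): Unit = queue.put(event)
+
+  // Mutation of loop-owned state happens only indirectly, via events.
+  def increment(): Unit = send(() => state += 1)
+
+  // Serial execution: one thread drains the queue, so each handler runs
+  // atomically with respect to the others.
+  private val loop = new Thread(() => while (true) queue.take().run())
+  loop.setDaemon(true)
+  loop.start()
+}
+```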
+
+The end result of all this decoupling and isolation of computation is a natural fit for mobile ad hoc networks. In this domain, connections are volatile, with limited range and transient failures. Removing coupling based on time or synchronization suits the domain, and the communicating event-loop actor model is a natural model for programming these systems. AmbientTalk/2 provides additional features on top of the communicating event-loop model, like service discovery. These enable ad hoc network creation, as actors near each other can broadcast their existence and advertise common services that can be used for communication.
+
+AmbientTalk/2 is most notable as a reimagining of the communicating event-loops actor model for a modern use case. This again speaks to the broader advantages of actors and their applicability to solving the problems of distributed systems.
+
+# Active Objects
+
+Active object actors draw a distinction between two different types of objects: active and passive objects. Every active object has a single entry point defining a fixed set of messages that are understood. Passive objects are the objects that are actually sent between actors, and are copied around to guarantee isolation. This enables a separation of concerns between data that relates to actor communication and data that relates to actor state and behavior.
+
+The active object model as initially described in the ABCL/1 language defines objects with a state and three modes:
+
+* `dormant`: Initial state of no computation, simply waiting for a message to activate the behavior of the actor.
+* `active`: A state in which computation is performed that is triggered when a message is received that satisfies the patterns and constraints that the actor has defined it can process.
+* `waiting`: A state of blocked execution, where the actor is active, but waiting until a certain type or pattern of message arrives to continue computation.
+
+## ABCL/1 Language
+
+The ABCL/1 language implements the active object model described above, representing a system as a collection of objects and the interactions between those objects as concurrent messages being passed around. {% cite Yonezawa:1986:OCP:960112.28722 --file message-passing %} One interesting aspect of ABCL/1 is the idea of explicitly different modes of message passing. Other actor models generally have a notion of priority around the values, types, or patterns of messages they process, usually defined by the ordering of their receive operation, but ABCL/1 implements two different modes of message passing with different semantics. It has standard queued messages in the `ordinary` mode, but, more interestingly, it has `express` priority messages. When an object receives an express message, it halts any other processing of ordinary messages and processes the `express` message immediately. This enables an actor to accept high-priority messages while in `active` mode, and also enables monitoring and interrupting actors.
+
+The language also offers different models of synchronization around message-passing between actors. Three different message-passing models are given that enable different use cases:
+
+* `past`: Requests another actor to perform a task, while simultaneously proceeding with computation without waiting for the task to be completed.
+* `now`: Waits for a message to be received, and to receive a response. This acts as a basic synchronization barrier across actors.
+* `future`: Acts like a typical future, continuing computation until a remote result is needed, and then blocking until that result is received.
+
+It is interesting to note that all of these modes can be expressed by the `past` style of message-passing, as long as the type of the message and which actor to reply to with results are known.
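+
+These three modes map loosely onto familiar patterns in modern actor libraries. A hedged Akka sketch in Scala (the `Request` message type is our own illustration):
+
+```
+import akka.actor.ActorRef
+import akka.pattern.ask
+import akka.util.Timeout
+import scala.concurrent.{Await, Future}
+import scala.concurrent.duration._
+
+case class Request(task: String)
+
+def demo(actor: ActorRef): Unit = {
+  implicit val timeout: Timeout = Timeout(5.seconds)
+
+  // `past`: fire-and-forget; the sender continues immediately.
+  actor ! Request("work")
+
+  // `future`: keep computing until the remote result is actually needed.
+  val reply: Future[Any] = actor ? Request("work")
+
+  // `now`: a synchronization barrier; block until the response arrives.
+  val result = Await.result(reply, 5.seconds)
+  println(result)
+}
+```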
+
+The key difference with this style of actors is how actor lifecycles are managed. In the active object style, lifecycle is a result of messages or requests to actors, while in other styles we see a more explicit notion of lifecycle and of creating and destroying actors.
+
+## Orleans
+
+Orleans takes the concept of actors whose lifecycle is dependent on messaging or requests and places them in the context of cloud applications. {% cite Bykov:2011:OCC:2038916.2038932 --file message-passing %} Orleans does this via actors (called "grains") which are isolated units of computation and behavior that can have multiple instantiations (called "activations") for scalability. These actors also have persistence, meaning they have a persistent state that is kept in durable storage so that it can be used to manage things like user data.
+
+Orleans uses a different notion of identity than other actor systems. In other systems an "actor" might refer to a behavior and instances of that actor might refer to identities that the actor represents like individual users. In Orleans, an actor represents that persistent identity, and the actual instantiations are in fact reconcilable copies of that identity.
+
+The programmer essentially assumes that a single entity is handling requests to an actor, but the Orleans runtime actually allows for multiple instantiations for scalability. These instantiations are invoked in response to an RPC-like call from the programmer which immediately returns an asynchronous promise.
+
+In Orleans, declaring an actor looks like writing any other class that implements a specific interface. A simple example here is a `PlayerGrain` which can join games. All methods of an Orleans actor (grain) interface must return a `Task` or `Task<T>`, as they are all asynchronous.
+
+```
+public interface IPlayerGrain : IGrainWithGuidKey
+{
+ Task<IGameGrain> GetCurrentGame();
+ Task JoinGame(IGameGrain game);
+}
+
+public class PlayerGrain : Grain, IPlayerGrain
+{
+    private IGameGrain currentGame;
+
+ public Task<IGameGrain> GetCurrentGame()
+ {
+ return Task.FromResult(currentGame);
+ }
+
+ public Task JoinGame(IGameGrain game)
+ {
+ currentGame = game;
+ Console.WriteLine("Player {0} joined game {1}", this.GetPrimaryKey(), game.GetPrimaryKey());
+ return TaskDone.Done;
+ }
+}
+```
+
+Invoking a method on an actor is done like any other asynchronous call, using the `await` keyword in C#. This can be done from either a client or inside another actor (grain). In both cases the call looks almost exactly the same, the only difference being that clients use `GrainClient.GrainFactory` while actors can use `GrainFactory` directly.
+
+```
+IPlayerGrain player = GrainClient.GrainFactory.GetGrain<IPlayerGrain>(playerId);
+Task joinGameTask = player.JoinGame(currentGame);
+await joinGameTask;
+```
+
+Here a game client gets a reference to a specific player, and has that player join the current game. This code looks like any other asynchronous C# code a developer would be used to writing, but this is really an actor system where the runtime has abstracted away many of the details. The runtime handles all of the actor lifecycle in response to the requests clients and other actors within the system make, as well as persistence of state to long-term storage.
+
+Multiple instances of an actor can be running and modifying the state of that actor at the same time. The immediate question is: how does that actually work? It doesn't intuitively seem like transparently accessing and changing multiple isolated copies of the same state should produce anything but problems when it's time to do something with that state.
+
+Orleans solves this problem by providing mechanisms to reconcile conflicting changes. If multiple instances of an actor modify persistent state, they need to be reconciled into a consistent state in some meaningful way. The default here is a last-write-wins strategy, but Orleans also exposes the ability to create fine-grained reconciliation policies, as well as a number of common reconcilable data structures. If an application requires a certain reconciliation algorithm, the developer can implement it using Orleans. These reconciliation mechanisms are built upon Orleans' concept of transactions.
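+
+As a toy illustration of the default policy (our own Scala sketch, not the Orleans API), last-write-wins reconciliation of two activations' copies can be as simple as keeping the copy with the most recent write:
+
+```
+// Each activation's copy of the persistent state carries a timestamp;
+// reconciliation keeps the most recently written copy.
+final case class VersionedState[A](value: A, lastWriteMillis: Long)
+
+def lastWriteWins[A](a: VersionedState[A], b: VersionedState[A]): VersionedState[A] =
+  if (a.lastWriteMillis >= b.lastWriteMillis) a else b
+```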
+
+Transactions in Orleans are a way to causally reason about the different instances of actors that are involved in a computation. Because computation in this model happens in response to a single outside request, a given actor's chain of computation via associated actors always contains a single instantiation of each actor. This causal chain of instantiations is treated as a single transaction. At reconciliation time, Orleans uses these transactions, along with the current instantiation state, to reconcile to a consistent state.
+
+All of this is a longwinded way of saying that Orleans' programmer-centric contribution is that it separates the concerns of running and managing actor lifecycles from the concerns of how data flows throughout your distributed system. It does this in a fault-tolerant way, and for many programming tasks you likely wouldn't have to worry about scaling and reconciling data in response to requests. It provides the benefits of the actor model through a programming model that abstracts away details that you would otherwise have to worry about when using actors in production.
+
+# Why the actor model?
+
+The actor programming model offers benefits to programmers of distributed systems: it allows easier reasoning about behavior, provides a lightweight concurrency primitive that naturally scales across many machines, and enables looser coupling among components of a system, allowing for change without service disruption.
+
+Actors let a programmer reason more easily about behavior because they are at a fundamental level isolated from other actors. When programming an actor, the programmer only has to worry about the behavior of that actor and the messages it can send and receive. This alleviates the need to reason about an entire system. Instead the programmer has a fixed set of concerns, meaning they can ensure behavioral correctness in isolation, rather than having to worry about an interaction they hadn't anticipated. Actors provide a single means of communication (message-passing), which alleviates many of a programmer's concerns around concurrent modification of data. Data is restricted to the data within a single actor and the messages it has been passed, rather than all of the accessible data in the whole system.
+
+Actors are lightweight, meaning that the programmer usually does not have to worry about how many actors they are creating. This is a contrast to other fundamental units of concurrency like threads or processes, which a programmer has to be acutely aware of, as they incur high costs of creation, and quickly run into machine resource and performance limitations.
+
+<blockquote>
+<p>Without a lightweight process abstraction, users are often forced to write parts of concurrent applications in an event-driven style which obscures control flow, and increases the burden on the programmer.</p>
+<footer>Philipp Haller {% cite Haller:2009:SAU:1496391.1496422 --file message-passing %}</footer>
+</blockquote>
+
+Unlike threads and processes, actors can also easily be told to run on other machines, as they are functionally isolated. This traditionally cannot be done with threads or processes, which cannot be passed over the network to run elsewhere. Messages can be passed over the network, so an actor does not have to care where it is running as long as it can send and receive messages. Actors are more scalable because of this property: they can naturally be distributed across a number of machines to meet the load or availability demands of the system.
+
+Finally, because actors are loosely coupled, only depending on a set of input and output messages to and from other actors, their behavior can be modified and upgraded without changing the entire system. For example, a single actor could be upgraded to use a more performant algorithm to do its work, and as long as it can process the same input and output messages, nothing else in the system has to change. This isolation is a contrast to methods of concurrent programming like remote procedure calls, futures, and promises. These models emphasize a tighter coupling between units of computation, where a process may call a method directly on another process and expect a specific result. This means that both the caller and callee (receiver of the call) need to have knowledge of the code being run, so you lose the ability to upgrade one without impacting the other. This becomes a problem in practice, as it means that as the complexity of your distributed system grows, more and more pieces become linked together.
+
+This is not desirable, as a key characteristic of distributed systems is availability, and the more things are linked together, the more of your system you have to take down or halt to make changes/upgrades. Actors compare favorably to other concurrent programming primitives like threads or remote procedure calls due to their low cost and loosely coupled nature. They are also programmer friendly, and ease the programmer burden of reasoning about a distributed system.
+
+# Modern usage in production
+
+It is important when reviewing models of programming distributed systems not to look just to academia, but to see which of these systems are actually used in industry to build things. This can give us insight into which features of actor systems are actually useful, and the trends that exist throughout these systems.
+
+_On the Integration of the Actor Model in Mainstream Technologies: The Scala Perspective_ {% cite Haller:2012:IAM:2414639.2414641 --file message-passing %} provides some insight into the requirements of an industrial-strength actor implementation on a mainstream platform. These requirements were drawn out of an initial effort with [Scala Actors](#scala-actors) to bring the actor model to mainstream software engineering, as well as lessons learned from the deployment and advancement of production actors in [Akka](#akka).
+
+* _Library-based implementation_: It is not obvious which concurrency abstraction wins in real world cases, and different concurrency models might be used to solve different problems, so implementing a concurrency model as a library enables flexibility in usage.
+* _High-level domain-specific language_: A domain-specific language or something comparable is a requirement to compete with languages that specialize in concurrency, otherwise your abstractions are lacking in idioms and expressiveness.
+* _Event-driven implementation_: Actors need to be lightweight, meaning they cannot be mapped to an entire VM thread or process. For most platforms this means an event-driven model.
+* _High performance_: Most industrial applications that use actors are highly performance sensitive, and high performance enables more graceful scalability.
+* _Flexible remote actors_: Many applications can benefit from remote actors, which can communicate transparently over the network. Flexibility in deployment mechanisms is also very important.
+
+These attributes give us a good basis for analyzing whether an actor system can be successful in production. These are attributes that are necessary, but not sufficient for an actor system to be useful in production.
+
+## Failure handling
+
+One of the most important reasons people use actor systems in production is their support for failure handling and recovery. The root of this support is the previously mentioned ability of actors to supervise one another and to have supervisors notified of failures. _Designing Reactive Systems: The Role of Actors in Distributed Architecture_ {% cite ReactiveSystems --file message-passing %} details four well-known recovery steps that a supervising actor may take when notified of a problem with one of its workers (mirrored in the Akka sketch after the list).
+
+* Ignore the error and let the worker resume processing
+* Restart the worker and reset their state
+* Stop the worker entirely
+* Escalate the problem to the supervisor's supervising actor
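+
+These four steps correspond directly to Akka's supervision directives. A minimal Scala sketch using the Akka classic API (the exception-to-directive mapping is our own illustration):
+
+```
+import akka.actor.{Actor, OneForOneStrategy, SupervisorStrategy}
+import akka.actor.SupervisorStrategy._
+
+class Supervisor extends Actor {
+  override val supervisorStrategy: SupervisorStrategy =
+    OneForOneStrategy() {
+      case _: ArithmeticException   => Resume   // ignore the error; the worker keeps its state
+      case _: IllegalStateException => Restart  // restart the worker, resetting its state
+      case _: NullPointerException  => Stop     // stop the worker entirely
+      case _: Exception             => Escalate // pass the problem up to our own supervisor
+    }
+
+  def receive: Receive = {
+    case _ => () // children created via context.actorOf(...) are supervised above
+  }
+}
+```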
+
+Based on this scheme, all actors within a system will have a supervisor, which amounts to a large tree of supervision. At the top of the tree is the actor system itself, which may have a default recovery scheme like simply restarting the actor. An interesting note is that this frees individual actors from handling their own failures. The philosophy shifts to accepting that actors will fail, and that we need other explicit actors and mechanisms for handling failure outside of the business logic of the individual actor.
+
+<figure class="main-container">
+ <img src="./supervision_tree.png" alt="An actor supervision hierarchy tree" />
+ <footer>An actor supervision hierarchy. {% cite ReactiveSystems --file message-passing %}</footer>
+</figure>
+
+Another property that naturally falls out of supervision hierarchies is that they can be distributed across machines (nodes) within a cluster of actors for fault tolerance.
+
+<figure class="main-container">
+ <img src="./sentinel_nodes.png" alt="Actor supervision across cluster nodes." />
+ <footer>Actor supervision across cluster nodes. {% cite ReactiveSystems --file message-passing %}</footer>
+</figure>
+
+Critical actors can be monitored across nodes, which means that failures can be detected across nodes within a cluster. This allows other actors within the cluster to easily react to the entire state of the system, not just the state of their local machine. This is important for a number of problems that arise in distributed systems, like load-balancing and data/request partitioning. It also naturally allows for some form of recovery involving the other machines within a cluster, such as automatically spinning up another node or restarting the failed machine/node.
+
+Flexibility around failure handling is a key advantage of using actors in production systems. Supervision means that worker actors can focus on business logic, and failure-handling actors can focus on managing and recovering those actors. Actors can also be cluster-aware and have a view into the state of the entire distributed system.
+
+## Actors as a framework
+
+One trend that seems common among the actor systems we see in production is extensive environments and tooling. Akka, Erlang, and Orleans are the primary actor systems that see real production use, and the reason for this is that they essentially act as frameworks where many of the common problems of actors are taken care of for you. They offer support for managing and monitoring the deployment of actors as well as patterns or modules to handle problems like fault-tolerance and load balancing which every distributed actor system has to address. This allows the programmer to focus on the problems within their domain, rather than the common problems of monitoring, deployment, and composition.
+
+Akka and Erlang provide modules that you can piece together to build various pieces of functionality into your system. Akka provides a huge number of modules and extensions to configure and monitor a distributed system built using actors. It provides a number of utilities to meet common use-cases and deployment scenarios, and these are thoroughly listed and documented. For example, Akka includes modules to deal with the following common issues (and more):
+
+* Fault Tolerance via supervision hierarchies
+* Routing to balance load across actors
+* Persistence to save and recover actor state across failures and restarts
+* A testing framework specifically for actors
+* Cluster management to group and distribute actors across physical machines
+
+Additionally, they provide support for Akka Extensions, which are a mechanism for adding your own features to Akka. These are powerful enough that some core features of Akka, like Typed Actors and Serialization, are implemented as Akka Extensions.
+
+Erlang provides the Open Telecom Platform (OTP), which is a framework comprised of a set of modules and standards designed to help build applications. OTP takes the generic patterns and components of Erlang, and provides them as libraries that enable code reuse and best practices when developing new systems. Some examples of OTP libraries are:
+
+* A real-time distributed database
+* An interface to relational databases
+* A monitoring framework for machine resource usage
+* Support for interfacing with other communication protocols like SSH
+* A test framework
+
+Cloud Haskell also provides something analogous to Erlang's OTP called the Cloud Haskell Platform.
+
+Orleans is different from these as it is built from the ground up with a more declarative style and runtime. This does a lot of the work of distributing and scaling actors for you, but it is still definitely a framework which handles a lot of the common problems of distribution so that programmers can focus on building the logic of their system. Orleans takes care of the distribution of actors across machines, as well as creating new actor instances to handle increased load. Additionally, Orleans also deals with reconciliation of consistency issues across actor instantiations, as well as persistence of actor data to durable storage. These are common issues that the other industrial actor frameworks also address in some capacity using modules and extensions.
+
+## Module vs. managed runtime approaches
+
+Based on my research, there have been two prevalent approaches to frameworks which are actually used to build production actor systems in industry. These are high-level philosophies about the meta-organization of an actor system: design philosophies that aren't directly visible when just looking at the base actor programming and execution models. The easiest way to describe them is as the "module approach" and the "managed runtime approach". A high-level analogy is that the module approach is similar to manually managing memory, while the managed runtime approach is similar to garbage collection. In the module approach, you care about the lifecycle and physical allocation of actors within your system, while in the managed runtime approach you care more about the reconciliation behavior and flow of persistent state between automatic instantiations of your actors.
+
+Both Akka and Erlang take a module approach to building their actor systems. This means that when you build a system using these languages/frameworks, you are using smaller composable components as pieces of the larger system you want to build. You are explicitly dealing with the lifecycles and instantiations of actors within your system, where to distribute them across physical machines, and how to balance actors to scale. Some of these problems might be handled by libraries, but at some level you are specifying how all of the organization of your actors is happening. The JVM or Erlang VM isn't doing it for you.
+
+Orleans goes in another direction, which I call the managed runtime approach. Instead of providing small components which let you build your own abstractions, they provide a runtime in the cloud that attempts to abstract away a lot of the details of managing actors. It does this to such an extent that you no longer even directly manage actor lifecycles, where they live on machines, or how they are replicated and scaled. Instead you program with actors in a more declarative style. You never explicitly instantiate actors, instead you assume that the runtime will figure it out for you in response to requests to your system. You program in strategies to deal with problems like domain-specific reconciliation of data across instances, but you generally leave it to the runtime to scale and distribute the actor instances within your system.
+
+Both approaches have been successful in industry. Erlang has the famous use case of a telephone exchange and a successful history since then. Akka has an entire page detailing its usage in giant companies. Orleans has been used as a backend to massive Microsoft-scale games and applications with millions of users. It seems like the module approach is more popular, but there's only really one example of the managed runtime approach out there. There's no equivalent to Orleans on the JVM or Erlang VM, so realistically it doesn't have as much exposure in the distributed programming community.
+
+## Comparison to Communicating Sequential Processes
+
+One popular model of message-passing concurrency that has been getting attention is Communicating Sequential Processes (CSP). The basic idea behind CSP is that concurrent communication between processes is done by passing messages through channels. Arguably the most popular modern implementation of this is Go's channels. A lot of surface-level discussions of actors treat them simply as a lightweight concurrency primitive that passes messages. This zoomed-out view might conflate CSP-style channels and actors, but it misses a lot of subtleties, as CSP really can't be considered an actor framework. The core difference is that CSP implements some form of synchronous messaging between processes, while the actor model entirely decouples messaging between a sender and a receiver. Actors are much more independent, meaning it's easier to run them in a distributed environment without changing their semantics. Additionally, receiver failures don't affect senders in the actor model. Actors are a more loosely-coupled abstraction across a distributed environment, while CSP embraces tight coupling as a means of synchronization across processes. To conflate the two misses the point of both, as actors operate at a fundamentally different level of abstraction from CSP.
+
+# References
+
+{% bibliography --file message-passing %}
diff --git a/chapter/3/sentinel_nodes.png b/chapter/3/sentinel_nodes.png
new file mode 100644
index 0000000..21e8bd1
--- /dev/null
+++ b/chapter/3/sentinel_nodes.png
Binary files differ
diff --git a/chapter/3/supervision_tree.png b/chapter/3/supervision_tree.png
new file mode 100644
index 0000000..95bc84b
--- /dev/null
+++ b/chapter/3/supervision_tree.png
Binary files differ
diff --git a/chapter/6/acidic-to-basic-how-the-database-ph-has-changed.md b/chapter/6/acidic-to-basic-how-the-database-ph-has-changed.md
new file mode 100644
index 0000000..ffc94c0
--- /dev/null
+++ b/chapter/6/acidic-to-basic-how-the-database-ph-has-changed.md
@@ -0,0 +1,182 @@
+---
+layout: page
+title: "ACIDic to BASEic: How the database pH has changed"
+by: "Aviral Goel"
+---
+
+## 1. The **ACID**ic Database Systems
+
+Relational Database Management Systems are the most widely used database systems for persisting state. Their properties are defined in terms of transactions on their data. A database transaction can be a single operation or a sequence of operations, but it is treated by the database as a single logical operation on the data. The properties of these transactions provide certain guarantees to the application developer. The acronym **ACID** was coined by Theo Härder and Andreas Reuter in 1983 to describe them.
+
+* **Atomicity** guarantees that any transaction will either complete or leave the database unchanged. If any operation of the transaction fails, the entire transaction fails. Thus, a transaction is perceived as an atomic operation on the database. This property is guaranteed even during power failures, system crashes and other erroneous situations.
+
+* **Consistency** guarantees that any transaction will always result in a valid database state, i.e., the transaction preserves all database rules, such as unique keys.
+
+* **Isolation** guarantees that concurrent transactions do not interfere with each other. No transaction views the effects of other transactions prematurely. In other words, they execute on the database as if they were invoked serially (though a read and write can still be executed in parallel).
+
+* **Durability** guarantees that upon the completion of a transaction, the effects are applied permanently on the database and cannot be undone. They remain visible even in the event of power failures or crashes. This is done by ensuring that the changes are committed to disk (non-volatile memory).
+
+<blockquote><p><b>ACID</b>ity implies that if a transaction is complete, the database state is structurally consistent (adhering to the rules of the schema) and stored on disk to prevent any loss.</p></blockquote>
+
+Because of these strong guarantees, this model simplifies the life of the developer and has traditionally been the go-to approach in application development. It is instructive to examine how these properties are enforced.
+
+Single-node databases can simply rely upon locking to ensure *ACID*ity. Each transaction marks the data it operates upon, enabling the database to block other concurrent transactions from modifying the same data. The lock has to be acquired both while reading and while writing data. The locking mechanism enforces strict, linearizable consistency: all transactions are performed in a particular sequence and invariants are always maintained. An alternative, *multiversioning*, allows a read and a write operation to execute in parallel. Each transaction which reads data from the database is provided the earlier, unmodified version of any data being modified by a concurrent write operation. This means read operations don't have to acquire locks on the database, so reads and writes never block each other.
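+
+As a rough sketch of the multiversioning idea (a toy versioned cell, not a real database engine), a reader takes a snapshot timestamp when it begins and keeps seeing the latest version committed at or before that point, even while writers install newer versions:
+
+```javascript
+class MVCell {
+  constructor(initial) { this.versions = [{ commitTs: 0, value: initial }]; this.clock = 0; }
+  beginRead() {
+    const snapshotTs = this.clock; // reader's snapshot timestamp
+    return () => {
+      // latest version committed at or before the snapshot
+      const visible = this.versions.filter(v => v.commitTs <= snapshotTs);
+      return visible[visible.length - 1].value;
+    };
+  }
+  write(value) { this.versions.push({ commitTs: ++this.clock, value }); }
+}
+
+const cell = new MVCell("a");
+const read = cell.beginRead(); // reader begins
+cell.write("b");               // a concurrent write commits a new version
+read();                        // still "a": the read never blocks on, or sees, the newer write
+```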
+
+This model works well on a single node, but it exposes a serious limitation when too many concurrent transactions are performed. A single database server can only process so many concurrent read operations. The situation worsens when many concurrent write operations are performed: to guarantee *ACID*ity, the write operations are performed in sequence, so the last write request may have to wait an arbitrary amount of time - a totally unacceptable situation for many real-time systems. This forces the application developer to decide on a **scaling** strategy.
+
+### 1.1. Scaling transaction volume
+
+To increase the volume of transactions against a database, two scaling strategies can be considered
+
+* **Vertical Scaling** is the easiest approach to scaling a relational database. The database is simply moved to a larger computer which provides more transactional capacity. Unfortunately, it's far too easy to outgrow the capacity of the largest system available, and it is costly to purchase a bigger system each time that happens. Since this is specialized hardware, vendor lock-in adds further costs.
+
+* **Horizontal Scaling** is a more viable option and can be implemented in two ways. Data can be segregated into functional groups spread across databases. This is called *Functional Scaling*. Data within a functional group can be further split across multiple databases, enabling functional areas to be scaled independently of one another for even more transactional capacity. This is called *sharding*.
+
+Horizontal scaling through functional partitioning enables a high degree of scalability. However, the functionally separate tables may employ constraints such as foreign keys. For these constraints to be enforced by the database itself, all tables have to reside on a single database server, which limits horizontal scaling. To work around this limitation, the tables in a functional group have to be stored on different database servers - but then a single database server can no longer enforce constraints between the tables. In order to ensure the *ACID*ity of distributed transactions, distributed databases employ the two-phase commit (2PC) protocol, described below.
+
+* In the first phase, a coordinator node interrogates all other nodes to ensure that a commit is possible. If all databases agree then the next phase begins, else the transaction is canceled.
+
+* In the second phase, the coordinator asks each database to commit the data.
+
+2PC is a blocking protocol, and updates can take from a few milliseconds up to a few minutes to commit. While a transaction is being processed, other transactions are blocked, and so is the application that initiated the transaction. Another option is to handle consistency across databases at the application level, but this only complicates the situation for the application developer, who is likely to end up implementing a similar strategy anyway if *ACID*ity is to be maintained.
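+
+A sketch of the coordinator's side of 2PC (assuming each participant exposes hypothetical `prepare`/`commit`/`abort` calls, and omitting timeouts and coordinator failure) makes the blocking nature visible: nothing commits until every vote is in.
+
+```javascript
+async function twoPhaseCommit(tx, participants) {
+  // Phase 1: ask every node whether it can commit.
+  const votes = await Promise.all(participants.map(p => p.prepare(tx)));
+  if (votes.every(v => v === "yes")) {
+    // Phase 2: unanimous yes, so tell every node to commit.
+    await Promise.all(participants.map(p => p.commit(tx)));
+    return "committed";
+  }
+  // Any "no" vote cancels the transaction.
+  await Promise.all(participants.map(p => p.abort(tx)));
+  return "aborted";
+}
+```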
+
+## 2. The Distributed Concoction
+
+A distributed application is expected to have the following three desirable properties:
+
+1. **Consistency** - This is the guarantee of a total ordering of all operations on a data object such that each operation appears indivisible. This means that any read operation must return the most recently written value. This provides a very convenient invariant to the client application. This notion is also called *atomic* consistency, echoing (though not identical to) the Atomicity guarantee provided by relational database transactions.
+
+2. **Availability** - Every request to a distributed system must result in a response. However, this is too vague a definition. Whether a node failed while responding, ran a really long computation to generate a response, or whether the request or response was lost due to network issues is generally impossible for the client to determine. Hence, for all practical purposes, availability can be defined as the service responding to a request in a timely fashion; how much delay an application can bear depends on the application domain.
+
+3. **Partition Tolerance** - Partitioning is the loss of messages between the nodes of a distributed system. During a network partition, the system can lose an arbitrary number of messages between nodes. A partition-tolerant system will always respond correctly unless a total network failure happens.
+
+The consistency requirement implies that every request is treated atomically by the system, even if the nodes lose messages due to network partitions.
+The availability requirement implies that every request receives a response, even if a partition causes messages to be lost arbitrarily.
+
+## 3. The CAP Theorem
+
+![Partitioned Network](resources/partitioned-network.jpg)
+
+In the network above, all messages between the node sets M and N are lost due to a network issue. The system as a whole detects this situation. There are two options:
+
+1. **Availability first** - The system allows any application to read and write to data objects on these nodes independently even though they are not able to communicate. The application writes to a data object on node M. Due to the **network partition**, this change is not propagated to replicas of the data object in N. Subsequently, the application tries to read the value of that data object and the read operation executes on one of the nodes of N. The read operation returns the older value of the data object, so the application observes an **inconsistent** state.
+
+2. **Consistency first** - The system does not allow any application to write to data objects as it cannot ensure **consistency** of replica states. This means that the system is perceived to be **unavailable** by the applications.
+
+If there are no partitions, clearly both consistency and availability can be guaranteed by the system. This observation led Eric Brewer to conjecture, in an invited talk at PODC 2000:
+
+<blockquote><p>It is impossible for a web service to provide the following three guarantees:</p>
+<ul>
+<li>Consistency</li>
+<li>Availability</li>
+<li>Partition Tolerance</li>
+</ul></blockquote>
+
+This conjecture was later formally proven by Seth Gilbert and Nancy Lynch, and is now known as the CAP theorem.
+
+It is clear that the prime culprit here is network partition. If there were no network partitions, any distributed service could be both highly available and provide strong consistency of shared data objects. Unfortunately, network partitions cannot be ruled out in a distributed system.
+
+## 4. Two of Three - Exploring the CAP Theorem
+
+The CAP theorem dictates that the three desirable properties - consistency, availability, and partition tolerance - cannot all be offered simultaneously. Let's examine whether it's possible to achieve two of the three.
+
+### Consistency and Availability
+If there are no network partitions, then there is no loss of messages and all requests receive a response within the stipulated time. It is clearly possible to achieve both consistency and availability. Distributed systems over an intranet are an example of such systems.
+
+### Consistency and Partition Tolerance
+Without availability, both of these properties can be achieved easily. A centralized system provides these guarantees: the state of the application is maintained on a single designated node, and all updates from clients are forwarded by the other nodes to this designated node, which updates the state and sends the response. When a failure happens, the system does not respond and is perceived as unavailable by the client. Distributed locking algorithms in databases also provide these guarantees.
+
+### Availability and Partition Tolerance
+Without atomic consistency, it is very easy to achieve availability even in the face of partitions. Even if nodes fail to communicate with each other, they can individually handle query and update requests issued by the client. The same data object will have different states on different nodes as the nodes progress independently. This weak consistency model is exhibited by web caches.
+
+It's clear that any two of these three properties are easy to achieve in a distributed system. Since large scale distributed systems have to take partitions into account, will they have to sacrifice availability for consistency, or consistency for availability? Clearly, giving up either consistency or availability entirely is too big a sacrifice.
+
+## 5. The **BASE**ic distributed state
+
+When viewed through the lens of the CAP theorem and its consequences for distributed application design, we realize that we cannot commit to both perfect availability and strong consistency. But surely we can explore the middle ground: we can guarantee availability most of the time, with an occasionally inconsistent view of the data. Consistency is eventually achieved once communication between the nodes resumes. This leads to the following properties of current distributed applications, referred to by the acronym BASE.
+
+* **Basically Available** services are those which are partially available when partitions happen. Thus, they appear to work most of the time. Partial failures result in the system being unavailable only for a section of the users.
+
+* **Soft State** services provide no strong consistency guarantees. They are not write-consistent: since replicas may not be mutually consistent, applications have to accept stale data.
+
+* **Eventually Consistent** services try to make application state consistent whenever possible.
+
+## 6. Partitions and latency
+Any large scale distributed system has to deal with latency issues. In fact, network partitions and latency are fundamentally related: once a request is made and no response is received within some duration, the sender node has to assume that a partition has happened. The sender node can take one of the following steps:
+
+1. Cancel the operation as a whole. In doing so, the system is choosing consistency over availability.
+2. Proceed with the rest of the operation. This can lead to inconsistency but makes the system highly available.
+3. Retry the operation until it succeeds. The system is trying to ensure consistency at the cost of reduced availability.
+
+Essentially, a partition is an upper bound on the time spent waiting for a response. Whenever this upper bound is exceeded, the system chooses C over A or A over C. Note also that a partition may be perceived by only two nodes of a system, as opposed to all of them; partitions are a local occurrence.
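+
+The choice shows up directly in code. In the sketch below (with invented `remoteRead`/`localRead` helpers), the timeout is precisely the moment the system picks a side:
+
+```javascript
+function withTimeout(promise, ms) {
+  return Promise.race([
+    promise,
+    new Promise((_, reject) => setTimeout(() => reject(new Error("presumed partition")), ms)),
+  ]);
+}
+
+async function read(key, remoteRead, localRead) {
+  try {
+    return await withTimeout(remoteRead(key), 500);
+  } catch (e) {
+    // 1) rethrow e: cancel the operation (consistency over availability)
+    // 2) fall back: answer from possibly stale local state (availability over consistency)
+    // 3) or loop and retry remoteRead(key) until it succeeds (consistency, reduced availability)
+    return localRead(key);
+  }
+}
+```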
+
+## 7. Handling Partitions
+Once a partition has happened, it has to be handled explicitly. The designer has to decide which operations will be functional during partitions. The partitioned nodes will continue their attempts at communication. When the nodes are able to establish communication, the system has to take steps to recover from the partitions.
+
+### 7.1. Partition mode functionality
+When at least one side of the system has entered partition mode, the system has to decide which functionality to support. This decision depends on the invariants that the system must maintain. Depending on the nature of the problem, the designer may choose to compromise on certain invariants by allowing the partitioned system to provide functionality which might violate them; this means choosing availability over consistency. Other invariants may have to be maintained, and operations that would violate them either have to be modified or prohibited; this means choosing consistency over availability.
+
+Deciding which operations to prohibit, modify, or delay also depends on other factors, such as which node holds the data. If the data is stored on a given node, operations on that data can typically proceed on that node but not on other nodes.
+
+In any event, the bottom line is that if the designer wishes for the system to be available, certain operations have to be allowed. Each node has to maintain a history of these operations so that it can be merged with the rest of the system upon reconnection. Since operations can happen simultaneously on multiple disconnected nodes, all sides maintain this history. One way to maintain this information is through version vectors (a small sketch follows at the end of this section).
+
+Another interesting problem is communicating the progress of these operations to the user. Until the system gets out of partition mode, the operations cannot be committed completely. Until then, the user interface has to faithfully represent their incomplete or in-progress status to the user.
+
+### 7.2. Partition Recovery
+When the partitioned nodes are able to communicate again, they have to exchange information to restore consistency. Both sides have continued in their own independent directions; now the delayed operations on either side have to be performed and violated invariants have to be fixed. Given the state and history of both sides, the system has to accomplish the following tasks.
+
+#### 7.2.1. Consistency
+During recovery, the system has to reconcile the inconsistent states of both nodes. One approach is to start from the state at the time of partition and apply the operations of both sides in an appropriate order, ensuring that the invariants are maintained. Depending on the operations allowed during the partition phase, this process may or may not be possible. The general problem of conflict resolution is not solvable, but a restricted set of operations may ensure that the system can always merge conflicts. For example, Google Docs limits operations to style and text editing. But source-code control systems such as the Concurrent Versions System (CVS) may encounter conflicts which require manual resolution. Research has been done on techniques for automatic state convergence. Using commutative operations allows the system to sort the operations into a consistent global order and execute them. Not all operations can be made commutative, however; for example, addition with bounds checking is not commutative. Marc Shapiro and his colleagues at INRIA have developed *commutative replicated data types (CRDTs)* that provably converge as operations are performed. By implementing state through CRDTs, we can ensure availability and automatic state convergence after partitions.
+
+#### 7.2.2. Compensation
+During a partition, it's possible for both sides to perform actions which are externalized, i.e., whose effects are visible outside the system. To compensate for these actions, the partitioned nodes have to maintain a history.
+
+For example, consider a system in which both sides have executed the same order during a partition. During the recovery phase, the system has to detect this and distinguish it from two intentional orders. Once detected, the duplicate order has to be rolled back. If the order has been committed successfully, then the problem has been externalized: the user will see twice the amount deducted from their account for a single purchase. Now the system has to credit the appropriate amount to the user's account and possibly send an email explaining the entire debacle. All this depends on the system maintaining a history during the partition. If the history is not present, duplicate orders cannot be detected and the user will have to catch the mistake and ask for compensation.
+
+It would have been better if the duplicate order were never issued in the first place, but the requirement to maintain availability trumps consistency. Mistakes in such cases cannot always be corrected internally; by admitting them and compensating for them, the system arguably exhibits equivalent behavior.
+
+## 8. What's the right pH for my distributed solution?
+
+Whether an application chooses to be an *ACID*ic or a *BASE*ic service depends on the domain. An application developer has to consider the consistency-availability tradeoff on a case-by-case basis. *ACID*ic databases provide a simple and strong consistency model, making application development easy for domains where data inconsistency cannot be tolerated. *BASE*ic systems provide a much looser consistency model, placing more burden on the application developer to understand the invariants and manage them carefully during partitions by appropriately limiting or modifying the operations.
+
+## 9. References
+
+https://neo4j.com/blog/acid-vs-base-consistency-models-explained/
+https://en.wikipedia.org/wiki/Eventual_consistency/
+https://en.wikipedia.org/wiki/Distributed_transaction
+https://en.wikipedia.org/wiki/Distributed_database
+https://en.wikipedia.org/wiki/ACID
+http://searchstorage.techtarget.com/definition/data-availability
+https://aphyr.com/posts/288-the-network-is-reliable
+http://research.microsoft.com/en-us/um/people/navendu/papers/sigcomm11netwiser.pdf
+http://web.archive.org/web/20140327023856/http://voltdb.com/clarifications-cap-theorem-and-data-related-errors/
+http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
+http://www.hpl.hp.com/techreports/2012/HPL-2012-101.pdf
+http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf
+http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf
+https://people.mpi-sws.org/~druschel/courses/ds/papers/cooper-pnuts.pdf
+http://blog.gigaspaces.com/nocap-part-ii-availability-and-partition-tolerance/
+http://stackoverflow.com/questions/39664619/what-if-we-partition-a-ca-distributed-system
+https://people.eecs.berkeley.edu/~istoica/classes/cs268/06/notes/20-BFTx2.pdf
+http://ivoroshilin.com/2012/12/13/brewers-cap-theorem-explained-base-versus-acid/
+https://www.quora.com/What-is-the-difference-between-CAP-and-BASE-and-how-are-they-related-with-each-other
+http://berb.github.io/diploma-thesis/original/061_challenge.html
+http://dssresources.com/faq/index.php?action=artikel&id=281
+https://saipraveenblog.wordpress.com/2015/12/25/cap-theorem-for-distributed-systems-explained/
+https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed
+https://dzone.com/articles/better-explaining-cap-theorem
+http://www.julianbrowne.com/article/viewer/brewers-cap-theorem
+http://dl.acm.org/citation.cfm?doid=1394127.1394128
+https://people.eecs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf
+http://www.johndcook.com/blog/2009/07/06/brewer-cap-theorem-base/
+http://searchsqlserver.techtarget.com/definition/ACID
+http://queue.acm.org/detail.cfm?id=1394128
+http://www.dataversity.net/acid-vs-base-the-shifting-ph-of-database-transaction-processing/
+https://neo4j.com/developer/graph-db-vs-nosql/#_navigate_document_stores_with_graph_databases
+https://neo4j.com/blog/aggregate-stores-tour/
+https://datatechnologytoday.wordpress.com/2013/06/24/defining-database-availability/
+{% bibliography --file rpc %}
diff --git a/chapter/6/being-consistent.md b/chapter/6/being-consistent.md
new file mode 100644
index 0000000..233d987
--- /dev/null
+++ b/chapter/6/being-consistent.md
@@ -0,0 +1,82 @@
+---
+layout: page
+title: "Being Consistent"
+by: "Aviral Goel"
+---
+
+## Replication and Consistency
+Availability and Consistency are the defining characteristics of any distributed system. As dictated by the CAP theorem, accommodating network partitions requires a tradeoff between the two properties. Modern, large-scale, internet-based distributed systems have to be highly available. To manage huge volumes of data (big data) and to reduce access latency for a geographically diverse user base, their data centers also have to be geographically spread out. Network partitions, which would otherwise happen with low probability on a local network, become certain events in such systems. To ensure availability in the event of partitions, these systems have to replicate data objects. This raises the question: how do we ensure consistency of these replicas? It turns out there are different notions of consistency to which the system can adhere.
+
+* **Strong Consistency** implies linearizability of updates, i.e., all updates applied to a replicated data type are serialized in a global total order. This means that any update has to be applied to all replicas simultaneously. It's obvious that this notion of consistency is too restrictive: a single unavailable node violates the condition, and forcing all updates to happen synchronously impacts availability negatively. This notion clearly does not fit the requirements of highly available, fault-tolerant systems.
+
+* **Eventual Consistency** is a weaker model of consistency that does not guarantee immediate consistency of all replicas. Any local update is immediately executed on the replica. The replica then sends its state asynchronously to other replicas. As long as all replicas share their states with each other, the system eventually achieves stability. Each replica finally contains the same value. During the execution, all updates happen asynchronously at all replicas in a non-deterministic order. So replicas can be inconsistent between updates. If updates arrive concurrently at a replica, a consensus protocol can be employed to ensure that both updates taken together do not violate an invariant. If they do, a rollback has to be performed and the new state is communicated to all the other replicas.
+
+Most large scale distributed systems try to be **Eventually Consistent** to ensure high availability and partition-tolerance. But conflict resolution is hard. There is little guidance on correct approaches to consensus, and it's easy to come up with an error-prone, ad-hoc approach. What if we could side-step conflict resolution and rollback completely? Is there a way to design data structures which do not require any consensus protocols to merge concurrent updates?
+
+## A Distributed Setting
+
+### TODO: Will finish this part with the detailed explanation of CRDTs in the next chapter.
+Consider a replicated counter. Each node can increment the value of its local copy. Imagine three nodes which increment their local copies at arbitrary points in time; each replica sends its value asynchronously to the other two replicas. Whenever a replica receives a peer's value, it merges it into its current state; if two values are received concurrently, both are merged in. So merging replicas in this example is trivial.
+
+Let's take a look at another interesting generalization of this: an integer vector, in which each node updates its own entry and replicas merge by taking the per-index maximum.
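+
+As a sketch of one standard design (a state-based, grow-only counter, assuming each node has a fixed index), each replica keeps one slot per node: `increment` bumps only the local slot, `merge` takes the per-index maximum, and the counter's value is the sum of the slots. Because the merge is a per-index maximum, receiving the same state twice changes nothing:
+
+```javascript
+class ReplicatedCounter {
+  constructor(nodeId, nodeCount) {
+    this.nodeId = nodeId;
+    this.slots = new Array(nodeCount).fill(0); // one slot per node
+  }
+  increment() { this.slots[this.nodeId] += 1; } // only touch our own slot
+  merge(remoteSlots) {
+    // per-index max: idempotent, commutative, associative
+    this.slots = this.slots.map((v, i) => Math.max(v, remoteSlots[i]));
+  }
+  value() { return this.slots.reduce((a, b) => a + b, 0); }
+}
+```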
+
+
+We can make an interesting observation from the previous examples:
+
+__*Some distributed data structures don't need conflict resolution*__
+
+This raises the following question:
+
+__*How can we design a distributed structure such that we don't need conflict resolution?*__
+
+The answer to this question lies in an algebraic structure called the **join semilattice**.
+
+## Join Semilattice
+A join-semilattice, or upper semilattice, is a *partial order* `≤` with a *least upper bound* (LUB) `⊔` for all pairs.
+`m = x ⊔ y` is a least upper bound of `{x, y}` under `≤` iff `x ≤ m ∧ y ≤ m` and, for every upper bound `m′` of `{x, y}` (that is, every `m′` with `x ≤ m′ ∧ y ≤ m′`), `m ≤ m′`.
+
+`⊔` is:
+
+**Associative**
+
+`(x ⊔ y) ⊔ z = x ⊔ (y ⊔ z)`
+
+**Commutative**
+
+`x ⊔ y = y ⊔ x`
+
+**Idempotent**
+
+`x ⊔ x = x`
+
+The examples we saw earlier were of structures that can be modeled as join semilattices. For the increment-only counter, merging combines the per-node contributions (the counter's value being their sum); for the integer vector, the merge is the per-index maximum of the vectors being merged.
+So, if we can model the state of the data structure as a partially ordered set and design the merge operation to always compute the "larger" of the two states, its replicas will never need consensus. They will always converge as execution proceeds. Such data structures are called CRDTs (Conflict-free Replicated Data Types). But what about the consistency of these replicas?
+
+## Strong Eventual Consistency (SEC)
+We discussed a notion of consistency, *Eventual Consistency*, in which replicas eventually become consistent if there are no more updates to be merged. But the update operation is left unspecified. It's possible for an update to render the replica in a state that causes it to conflict with a later update. In this case the replica may have to roll back and use consensus to ensure that all replicas do the same. This is complicated and wasteful. But if replicas are modeled as CRDTs, the updates never conflict. Regardless of the order in which the updates are applied, all replicas will eventually have equivalent state. Note that no conflict arbitration is necessary. This kind of eventual consistency is a stronger notion than the one requiring conflict arbitration, and hence is called *Strong Eventual Consistency*.
+
+### Strong Eventual Consistency and CAP Theorem
+
+Let's study SEC data objects from the perspective of CAP theorem.
+
+#### 1. Consistency and Network Partition
+Each distributed replica will communicate asynchronously with other reachable replicas. These replicas will eventually converge to the same value. There is no consistency guarantee on the value of replicas not reachable due to network conditions and hence this condition is strictly weaker than strong consistency. But as soon as those replicas can be reached, they will also converge in a self-stabilizing manner.
+
+#### 2. Availability and Network Partition
+Each distributed replica will always be available for local reads and writes regardless of network partitions. In fact, if there are n replicas, a single replica will function even if the remaining n - 1 replicas crash simultaneously. This **provides an extreme form of availability**.
+
+SEC facilitates maximum consistency and availability in the event of network partitions by relaxing the requirement of global consistency. Note that this is achieved by virtue of modeling the data objects as join semilattices.
+
+#### Strong Eventual Consistency and Linearizability
+In a distributed setting, a replica has to handle concurrent updates. In addition to its sequential behavior, a CRDT has to ensure that its concurrent behavior preserves strong eventual consistency. This makes it possible for CRDTs to exhibit behavior that is simply not possible for sequentially consistent objects.
+
+Consider a set CRDT used in a distributed setting. One of the replicas p<sub>i</sub> executes the sequence `add(a); remove(b)`. Another replica p<sub>j</sub> executes the sequence `add(b); remove(a)`. Now both send their states asynchronously to a third replica p<sub>k</sub>, which has to merge them. Each element exists in one of the incoming sets but not in the other. There are multiple choices the CRDT designer can make; let's assume the implementation always prefers inclusion over exclusion. In this case, p<sub>k</sub> will include both `a` and `b`.
+
+Now consider a sequential execution of the two sequences on an ordinary set data structure. The order of execution will be either `add(a); remove(b); add(b); remove(a)` or `add(b); remove(a); add(a); remove(b)`. In both cases one of the elements ends up excluded. This differs from the state of the CRDT set implementation. Thus, strongly eventually consistent data structures can be sequentially inconsistent.
+
+Similarly, if there are `n` sequentially consistent replicas, they need consensus to agree on a single order of execution across all replicas. But if `n - 1` replicas crash, consensus cannot happen. This makes the idea of sequential consistency incomparable to that of strong eventual consistency.
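+
+To make the add/remove scenario above concrete, here is a naive sketch of the inclusion-preferring merge: p<sub>k</sub> keeps anything present on either side, so the concurrent add/remove pairs resolve to "added". (Note this toy merge can resurrect removals; real CRDT sets need more machinery than this.)
+
+```javascript
+function mergePreferInclusion(setA, setB) {
+  return new Set([...setA, ...setB]); // err toward inclusion
+}
+
+const pi = new Set(["a"]); // p_i executed add(a); remove(b)
+const pj = new Set(["b"]); // p_j executed add(b); remove(a)
+mergePreferInclusion(pi, pj); // Set { "a", "b" } at p_k
+```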
+
+## What Next?
+This chapter introduced Strong Eventual Consistency and the formalism behind CRDTs, the join semilattice, which enables CRDTs to exhibit strong eventual consistency. The discussion, however, does not answer an important question:
+
+__*Can all standard data structures be designed as CRDTs?*__
+
+The next chapter sheds more light on the design of CRDTs and attempts to answer this question.
diff --git a/chapter/6/consistency-crdts.md b/chapter/6/consistency-crdts.md
deleted file mode 100644
index fcb49e7..0000000
--- a/chapter/6/consistency-crdts.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-layout: page
-title: "Consistency & CRDTs"
-by: "Joe Schmoe and Mary Jane"
----
-
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file consistency-crdts %}
-
-## References
-
-{% bibliography --file consistency-crdts %} \ No newline at end of file
diff --git a/chapter/6/resources/partitioned-network.jpg b/chapter/6/resources/partitioned-network.jpg
new file mode 100644
index 0000000..513fc13
--- /dev/null
+++ b/chapter/6/resources/partitioned-network.jpg
Binary files differ
diff --git a/chapter/7/langs-consistency.md b/chapter/7/langs-consistency.md
index 3ac6ceb..b19ba23 100644
--- a/chapter/7/langs-consistency.md
+++ b/chapter/7/langs-consistency.md
@@ -1,11 +1,628 @@
---
layout: page
-title: "Languages for Consistency"
-by: "Joe Schmoe and Mary Jane"
+title: "Formal, Yet Relaxed: Models for Consistency"
+by: "James Larisch"
---
+# Formal, Yet Relaxed: Models for Consistency
-Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. {% cite Uniqueness --file langs-consistency %}
+## What's the problem?
+ In many ways, web developers deal with distributed systems problems every day: your client and your server are in two different geographical locations, and thus, some coordination between computers is required.
+
+ As Aviral discussed in the previous section, many computer scientists have done a lot of thinking about the nature of distributed systems problems. As such, we realize that it's impossible to completely emulate the behavior of a single computational machine using multiple machines. For example, the network is simply not as reliable as, say, memory - and waiting for responses can result in untimeliness for the application's userbase. After discussing the Consistency/Availability/Partition-tolerance theorem, Section 6 discussed how we can drill down into the CAP pyramid and choose the necessary and unnecessary properties of our systems. As stated, we can't perfectly emulate a single computer using multiple machines, but once we accept that fact and learn to work with it, there are plenty of things we *can* do!
+
+## The Shopping Cart
+ Let's bring all this theorem talk back to reality. Let's say you're working at a new e-commerce startup, and you'd like to revolutionize the electronic shopping cart. You'd like to give the customer the ability to do the following:
+ 1. Log in to the site and add a candle to the cart while traveling in Beijing.
+ 1. Take a HyperLoop (3 hours) from Beijing to Los Angeles.
+ 1. Log back in, remove the candle from the cart, and add a skateboard.
+ 1. Take another HyperLoop train from Los Angeles to Paris (5 hours).
+ 1. Log back into the site, add another skateboard, and check out.
+
+Let's assume you have a server in every single country, and customers connect to the geographically closest server.
+
+How can we ensure that the client sees the same cart at every point in her trip?
+
+If you only had one user of your website, this wouldn't be too hard. You could manually and constantly check all of your servers and personally make sure the state of the customer's shopping cart is consistent across every single one. But what happens when you have millions of customers and thus millions of shopping carts? That would be impossible to keep track of personally. Luckily, you're a programmer - this can be automated! You simply need to make sure that all of your computers stay in sync, so that if the customer checks her cart in Beijing, then in Paris, she sees the same thing.
+
+But as Section 6 has already explained, this is not so trivial. Messages between your servers in Beijing and Paris could be dropped, corrupted, reordered, duplicated, or delayed. Servers can crash. Sharks can cut the network cables between countries. Since you have no guarantees about when you'll be able to synchronize state between two servers, it's possible that the customer could see two different cart-states depending on which country she's in (which server she asks).
+
+It's possible to implement "consensus" protocols such as Paxos and 3-Phase-Commit that provide coordination between your machines. When failure happens, such as a network shark-attack, the protocol detects a lack of consistency and becomes *unavailable* - at least until it is consistent once more. For applications in which inconsistent state is dangerous, this is appropriate. For a shopping cart, this seems like overkill. If our shopping cart system experienced a failure and became unavailable, users would not be able to add or remove things from the cart. They also couldn't check out. This means our startup would lose money! Perhaps it's not so important that our clients' shopping carts be completely synchronized across the entire world at all times. After all, how often are people going to be doing such wanderlust shopping?
+
+This is an important moment. By thinking about our specific problem, we've realized a compromise we're willing to make: our users always need to be able to add things, remove things, and checkout. In other words, our service needs to be as *available* as possible. Servers don't necessarily need to agree all the time. We'd like them to, but the system shouldn't shut down if they don't. We'll find a way to deal with it.
+
+Turns out there's a company out there called Amazon.com - and they've been having a similar problem. Amazon sells things on their website too, and users can add and remove things from their cart. Amazon has lots of servers spread out across the world. They also have quite a few customers. They need to ensure their customers' carts are robust: if/when servers fail or lose communication with one another, a "best-effort" should be made to display the customer's cart. Amazon acknowledges that failure, latency, or HyperLoop-traveling users can cause inconsistent cart data, depending on which server you ask. How does Amazon resolve these issues?
+
+## Dynamo
+Amazon built Dynamo {% cite Dynamo --file langs-consistency %}, which is basically a big distributed hash table. In other words, it's a hashmap spread across multiple computers. A user's cart would be stored as a value under the user's username as the key. (`{'james': {'candle', 'skateboard'}}`) When a user adds a new item to her cart, the cart data is replicated across multiple machines within the network. If the client changes locations and then performs another write, or if a few machines fail and later recover, it's possible for different machines to have different opinions about the state of a given user's cart.
+
+Dynamo has a rather unusual way of dealing with these types of conflicts. Since Dynamo always wants to be available for both writes and reads (add/removes and viewing/checkouts, respectively), it must have a way of combining inconsistent data. Dynamo chooses to perform this resolution at read time. When a client performs a `get()` on the user's cart, Dynamo will take the multiple conflicting carts and push them up to the application! Huh? I thought Dynamo resolves this for the programmer!? Actually, Dynamo is a generic key-value store. It detects inconsistencies in the data - but once it does, it simply tells the application (in this case, the shopping cart code) that there are some conflicts. The application is free to resolve these inconsistencies as it pleases.
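+
+In pseudocode (with invented names), the division of labor looks something like this: the store hands every conflicting version up, and the application decides how to merge them.
+
+```javascript
+async function getCart(store, userId) {
+  // e.g. [ Set{candle, skateboard}, Set{candle, umbrella} ]
+  const versions = await store.get(userId);
+  // application-level resolution: the shopping cart errs toward inclusion
+  return versions.reduce((merged, v) => new Set([...merged, ...v]), new Set());
+}
+```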
+
+How should Amazon's shopping cart proceed with resolution? It may be fed two cart states like so:
+
+```
+James's Cart V1 | James's Cart V2
+-----------------------------------
+Red Candle | Red Candle
+Blue Skateboard | Green Umbrella
+```
+
+Amazon doesn't want to accidentally *remove* anything from your cart, so it errs on the side of inclusion. If given this particular conflict, you would see:
+
+```
+James's Cart
+------------
+Red Candle
+Blue Skateboard
+Green Umbrella
+```
+
+It's important to understand that Amazon has multiple machines storing the contents of your cart. These machines communicate asynchronously in order to tell each other about updates they've received. Conflicts like this can happen when you try to read before the nodes have had time to gossip about your cart. More likely, however, is the situation in which one of the machines holding your cart goes offline and misses some updates. When it comes back online, you try to read, and this resolution process must occur.
+
+### Good & Bad
+What do we love about Dynamo? It's a highly available key-value store. It replicates data well, and according to the paper, has high uptime and low latency. We love that it's *eventually consistent*. Nodes are constantly gossiping, so given enough time (and assuming failures are resolved), nodes' states will eventually converge. However, this property is *weak*. It's weak because when failures & conflicts occur ([and they will occur](https://www.youtube.com/watch?v=JG2ESDGwHHY)), it's up to the application developer to figure out how to handle them. There isn't a one-size-fits-all solution for resolving conflicts. In the case of the shopping cart, it's relatively trivial. But as a programmer, every time you use Dynamo for a different purpose you need to reconsider your resolution strategy. The database doesn't provide a general solution.
+
+Instead of constructing an all-purpose database and forcing the burden of resolution on programmers, what if we constructed multi-purpose (read: multi, not *all*) data structures that required no manual resolution? These data structures would resolve conflicts inherently, themselves, and depending on your application you could choose which data structure works best for you.
+
+Let's try this transfiguration on the shopping cart. Let's strip it down: how does Amazon handle resolution, really? It treats shopping cart versions as sets of items. In order to perform resolution, Amazon unions the two sets.
+
+```
+{ Red Candle, Blue Skateboard } U { Red Candle, Green Umbrella } == { Red Candle, Blue Skateboard, Green Umbrella }
+```
+
+Using this knowledge, let's try to construct our own shopping cart that automatically resolves conflicts.
+
+(Unfortunately Amazon has a leg up on our startup. Their programmers have figured out a way to add multiple instances of a single item to the cart. Users on our website can only add one "Red Candle" to their shopping cart. This is due to a fundamental limitation in the type of CRDT I chose to exemplify. It's quite possible to have a fully functional cart: take a look at LWW-Sets.)
+
+### Example
+
+Let's take a look at the following Javascript. For simplicity's sake, let's pretend users can only add things to their shopping cart.
+
+```javascript
+class Cart {
+  constructor(peers, socket) {
+    this.mySocket = socket;
+    this.peers = peers;
+    this.items = new Set();
+  }
+
+  addItem(item) {
+    this.items.add(item);
+  }
+
+  synchronize() {
+    this.peers.forEach((peer) => {
+      peer.send(this.items);
+    });
+  }
+
+  receiveState(items) {
+    this.items = this.items.union(items); // set union; invented
+  }
+
+  run() {
+    var clientAddition = Interface.nonBlockingReceiveInput(); // invented
+    if (clientAddition !== undefined) {
+      this.addItem(clientAddition);
+    }
+    var receivedState = this.mySocket.nonBlockingRead(); // invented
+    if (receivedState !== undefined) {
+      this.receiveState(receivedState);
+    }
+    this.synchronize();
+    sleep(10); // invented; sleep for 10 seconds
+    this.run();
+  }
+}
+
+// theoretical usage
+
+var socket = new UDPSocket(); // invented
+var cart = new Cart(peerSockets, socket); // peerSockets is an array of UDP sockets
+cart.run();
+cart.items // the cart's items
+```
+
+Here is an (almost) fully functional shopping cart program. You can imagine this code running across multiple nodes scattered over the world. The meat of the program lies in the `run()` method. Let's walk through that:
+ 1. Program receives an addition to the cart from the user.
+ 2. Program adds that item to the current local state if it exists.
+ 3. Program checks its UDP socket for any messages.
+ 4. If it received one, it means another instance of this program has sent us its state. What is state in this case? Simply a set of cart items. Let's handle this set of items by unioning it with our current set.
+ 5. Synchronize our current state by sending our state to every peer that we know about.
+ 6. Sleep for 10 seconds.
+ 7. Repeat!
+
+Hopefully it's clear that if a client adds an item to her cart in Beijing and then 10 seconds later checks her cart in Paris, she should see the same thing. Well, not exactly - remember, the network is unreliable, and Beijing's `synchronize` messages might have been dropped. But no worries! Beijing is `synchronizing` again in another 10 seconds. This should remind you of Dynamo's gossiping: nodes are constantly attempting to converge.
+
+Both systems are eventually consistent - the difference here is our Javascript shopping cart displays *strong* eventual consistency. It's strong because it requires no specialized resolution. When a node transmits its state to another node, there's absolutely no question about how to integrate that state into the current one. There's no conflict.
+
+### The Intern
+Unfortunately Jerry, the intern, has found your code. He'd like to add `remove` functionality to the cart. So he makes the following changes:
+
+```javascript
+class Cart {
+  constructor(peers, socket) {
+    this.mySocket = socket;
+    this.peers = peers;
+    this.items = new Set();
+  }
+
+  addItem(item) {
+    this.items.add(item);
+  }
+
+  synchronize() {
+    this.peers.forEach((peer) => {
+      peer.send(this.items);
+    });
+  }
+
+  receiveState(items) {
+    // JERRY WAS HERE
+    this.items = this.items.intersection(items); // set intersection; invented
+    // END JERRY WAS HERE
+  }
+
+  run() {
+    var clientAddition = Interface.nonBlockingReceiveInput(); // invented
+    if (clientAddition !== undefined) {
+      this.addItem(clientAddition);
+    }
+    // JERRY WAS HERE
+    var clientDeletion = Interface.nonBlockingReceiveInput(); // invented
+    if (clientDeletion !== undefined) {
+      this.items.delete(clientDeletion);
+    }
+    // END JERRY WAS HERE
+    var receivedState = this.mySocket.nonBlockingRead(); // invented
+    if (receivedState !== undefined) {
+      this.receiveState(receivedState);
+    }
+    this.synchronize();
+    sleep(10); // invented; sleep for 10 seconds
+    this.run();
+  }
+}
+
+// theoretical usage
+
+var socket = new UDPSocket(); // invented
+var cart = new Cart(peerSockets, socket); // peerSockets is an array of UDP sockets
+cart.run();
+cart.items // the cart's items
+```
+
+Uh-oh. Can you spot the problem? Let's break it down. In the original code, the current node's cart items were *unioned* with the communicating node's cart. Since there was no deletion, carts could only ever expand. Here was Jerry's plan:
+
+```
+> I want to delete things. If you delete something from node 2 and then intersect node 1's state with node 2's, the item will be deleted from node 1 as well.
+
+Node 1: { A, B }
+Node 2: { A, B }
+
+delete(Node2, A)
+
+Node 1: { A, B }
+Node 2: { B }
+
+Node1 = Node1.intersect(Node2)
+Node1: { B }
+```
+
+The reasoning is sound. However, there's a huge issue here. We've flipped the `union` operation on its head! Now, carts can *never* expand! They can only either stay the same size or shrink. So although Jerry's contrived example works, it's impossible to ever reach the beginning states of Node 1 and Node 2 unless those two nodes receive *the same writes*. Let's take it from the top:
+
+```
+Node 1: { }
+Node 2: { }
+
+add(Node1, A)
+add(Node2, B)
+
+Node 1: { A }
+Node 2: { B }
+
+Node1_temp = Node1.intersect(Node2)
+Node2_temp = Node2.intersect(Node1)
+Node1 = Node1_temp
+Node2 = Node2_temp
+
+Node 1: { }
+Node 2: { }
+```
+
+This is pretty nasty. Jerry has come along and, with a few lines of code, obliterated our nice strongly eventually consistent program. Surely there's a better way.
+
+### Guarantees
+The original Javascript we wrote down exhibits the property from Section 6 known as logical *monotonicity*. The union operation ensures that a given node's state only ever grows: it is always "greater than or equal to" any state it has previously held. However, how can we be *sure* that this property is maintained throughout the development of this program? As we've seen, there's nothing stopping an intern from coming along, making a mindless change, and destroying this wonderful property. Ideally, we want to make it impossible (or at least very difficult) to write programs that violate this property. Or, at the very least, we want to make it very easy to write programs that maintain these types of properties.
+
+But where should these guarantees live? In the above Javascript example, the guarantees aren't guarantees at all, really. There's no restriction on what the programmer is allowed to do - the programmer has simply constructed a program that mirrors guarantees that she has modeled in her brain. In order to maintain properties such as *monotonicity*, she must constantly check the model in her brain against the code. We haven't really helped the programmer out that much - she has a lot of thinking to do.
+
+Databases such as PostgreSQL have issues like this as well, though they handle them quite differently: masters may need to ensure that writes have occurred on every slave before the database becomes available for reading. A database system like this has pushed consistency concerns to the IO-level, completely out of the user's control. They are enforced on system reads and system writes. This approach gives programmers no flexibility: as demonstrated with our shopping cart example, there's no need for this type of restriction; we can tolerate inconsistency in order to maintain availability.
+
+Why not push the consistency guarantees in between the IO-level and the application-level? {% cite ConsistencyWithoutBorders --file langs-consistency %} Is there any reason why you as the programmer couldn't program using tools that facilitate these types of monotonic programs? If you're familiar with formal systems -- why not construct a formal system (programming language / library) in which every theorem (program) is formally guaranteed to be monotonic? If it's *impossible* to express a non-monotonic program, the programmer needn't worry about maintaining a direct mapping between her code and her mental model.
+
+Wouldn't it be great if tools like this existed?
+
+### Bloom
+Before talking about such tools, I'd like you to forget almost everything you know about programming for a second (unless of course you've never programmed in a Von Neumann-based language in which you sequentially update pieces of memory; which, by the way, you have).
+
+Imagine the following scenario: you are "programming" a node in a cluster of computers. All of the other computers work as expected. When you receive a message (all messages will include an integer), your task is to save the message, increment the integer, and resend the message back to its originator. You must also send out messages you've received from `stdin`. Unfortunately, the programming environment is a little strange.
+You have access to four buffers:
+* Messages you have received in the last 5 seconds
+* Inputs you've received from `stdin` in the last 5 seconds
+* An outgoing messages buffer: flushed & sent every 5 seconds
+* A bucket of saved messages: *never* flushed
+
+However, you only have access to these buffers *every 5 seconds*. If messages are formatted as `(SOURCE, INTEGER, T)`, your buffers might look like this when `t = 0` (`t` is the number of seconds elapsed):
+
+```
+<T = 0>
+RECV-BUFFER: [(A, 1, 0), (B, 2, 0)]
+RSTDIN-INPUTS: [(A, 5, 0), (C, 10, 0)]
+SEND-BUFFER: []
+SAVED: [(D, -1, 0), (E, -100, 0)]
+```
+
+If you don't write any code to manipulate these buffers, when `t = 5`, your buffers might look like:
+
+```
+<T = 5>
+RECV-BUFFER: [(C, 10, 5)]
+STDIN-INPUTS: [(X, 1, 5)]
+SEND-BUFFER: []
+SAVED: [(D, -1, 0), (E, -100, 0)]
+```
+
+You can see that from `t = 0` to `t = 5`, you received one message from `C` and someone typed a message to `X` via `stdin`.
+
+Remember our goals?
+* Save received messages from the network
+* Send out messages received from `stdin`
+* For all received network messages, increment the integer and resend it back to the originator
+
+In Javascript, perhaps you code up something like this:
+
+```javascript
+onFiveSecondInterval(function() {
+ recvBuffer.forEach(function(msg) {
+ savedBuffer.push(msg); // save message
+ let newMsg = msg.clone()
+ newMsg.integer++; // increment recv'd message
+ newMsg.flipSourceDestination()
+ sendBuffer.push(newMsg); // send it out
+ });
+
+ stdinInputBuffer.forEach(function(msg) {
+ sendBuffer.push(msg); // send stdin message
+ });
+});
+```
+
+or Ruby:
+
+```ruby
+on_five_second_interval do
+ recv_buffer.each do |msg|
+ saved_buffer << msg
+ new_msg = msg.clone
+ new_msg.integer += 1
+ new_msg.flip_source_destination
+ send_buffer << new_msg
+ end
+
+ stdin_input_buffer.each do |msg|
+ send_buffer << msg
+ end
+end
+```
+
+We have expressed this model using an event-driven programming style: the callbacks are triggered when `t % 5 = 0`, i.e., when the buffers populate & flush.
+
+Notice we perform a few "copies". We read something from one buffer and place it into another one, perhaps after applying some modification. Perhaps we place a message from a given buffer into two buffers (`recv_buffer` to `saved_buffer` & `send_buffer`).
+
+This situation screams for a more functional approach:
+```ruby
+on_five_second_interval do
+ saved_buffer += recv_buffer # add everything in recv_buffer to saved_buffer
+
+ send_buffer += recv_buffer.map do |msg| # map over the recv_buffer, increment integers, add to send_buffer
+ new_msg = msg.clone
+ new_msg.integer += 1
+ new_msg.flip_source_destination # send to originator
+ new_msg # this block returns new_msg
+ end
+
+ send_buffer += stdin_input_buffer # add stdin messages to the send buffer
+end
+```
+
+After this block/callback is called, the system automatically flushes & routes messages as described above.
+
+Bloom {% cite Bloom --file langs-consistency %}, a research language developed at UC Berkeley, has a similar programming model to the one described above. Execution is broken up into a series of "timesteps". In the above example, one "timestep" would be the execution of one `on_five_second_interval` function. Bloom, like the theoretical system above, automatically flushes and populates certain buffers before and after each timestep. In the above example, 5 seconds was an arbitrary amount of time. In Bloom, timesteps (rounds of evaluation) are logical tools - they may happen every second, every 10 seconds, etc. Logically, it shouldn't affect how your program executes. In reality, timesteps correspond to evaluation iterations: your code is evaluated, executed, and the process repeats.
+
+So what does a Bloom program look like? Bloom's prototype implementation is called Bud and is implemented in Ruby. There are two main parts to a Bloom program:
+1. User defined buffers: rather than the four buffers I gave you above, Bloom users can define their own buffers. There are different types of buffers depending on the behavior you desire:
+ * `channel`: Above, `recv_buffer` and `send_buffer` would be considered channels. They facilitate sending network messages to and from other nodes. Like the messages above, messages sent into these channels contain a "location-specifier", which tells Bloom where the message should be sent. If you wanted to send a message to `A`, you could push the message `(@A, 10)` into your send buffer (in Ruby, `["@A", 10]`). The `@` denotes the location-specifier. At the end of the timestep (or callback execution in the above example), these buffers are flushed.
+ * `table`: Above, `saved_buffer` would be considered a table. The contents of tables persist across timesteps, which means tables are never flushed.
+2. Code to be executed at each timestep. A Bloom (Bud) program can be seen as the inside of the block passed to `on_five_second_interval`. In fact, it looks very similar, as we'll see.
+
+For the purposes of this chapter, let's assume `stdin_input_buffer` is a special kind of channel into which messages typed on `stdin` are sent. Let's also assume this channel exists in all Bloom programs.
+
+Let's take a look at an example Bud program.
+
+First, let's declare our state.
+
+```ruby
+module Incrementer
+ def state
+ channel :network_channel, ['@dst', 'src', 'integer']
+ table :saved_buffer, ['dst', 'src', 'integer']
+ # implied channel :stdin_input_buffer, ['@dst', 'src', 'integer']
+ end
+end
+```
+
+The first line of `state` means: declare a channel called `network_channel` whose messages are 3-tuples. The first field of the message is called `dst`, the second `src`, and the third `integer`. `@` is our location-specifier: when a program wants to send a message to another node, it places that node's identifier in the first field, `dst`. For example, a message destined for `A` would look like `['A', 'me', 10]`. The `@` denotes the location-specifier within the collection's "schema".
+
+The second line means: declare a table (persists) called `saved_buffer` in which messages follow the same format as `network_channel`. There's no location specifier since this collection is not network-connected.
+
+You can think of the Ruby array after the channel name as the "schema" of that collection.
+
+Notice how we only have one network channel for both receiving and sending. Before, we had two buffers, one for sending and one for receiving. When we place items *into* `network_channel`, Bud will automatically send messages to the appropriate `@dst`.
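+
+For example (a hedged sketch, with `'me'` standing in for this node's identifier, a name not used in the original): to send the integer 10 to node `A`, you could write:
+
+```ruby
+# A message is just a tuple matching the schema; Bud routes it to @dst.
+network_channel <~ [['A', 'me', 10]]
+```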
+
+Next, let's write our code. This code will be executed at every timestep. In fact, you can think of a Bud program as the code inside of a timestep callback. Let's model the raw Ruby code we saw above.
+
+```ruby
+module Incrementer
+ def state
+ channel :network_channel, ['@dst', 'src', 'integer']
+ table :saved_buffer, ['dst', 'src', 'integer']
+ # implied channel :stdin_input_buffer, ['@dst', 'src', 'integer']
+ end
+
+ declare
+ def increment_messages
+ network_channel <~ network_channel.map { |x| [x.src, x.dst, x.integer + 1] }
+ end
+
+ declare
+ def save_messages
+ saved_buffer <= network_channel
+ end
+
+ declare
+ def send_messages
+ network_channel <~ stdin_input_buffer
+ end
+end
+```
+
+Don't panic. Remember - the output of this program is identical to our Ruby callback program from earlier. Let's walk through it step by step.
+```ruby
+declare
+def increment_messages
+ network_channel <~ network_channel.map { |x| [x.src, x.dst, x.integer + 1] }
+end
+```
+Here, we take messages we've received from the network channel and send them back into the network channel. The `<~` operator says "copy all of the elements in the right-hand-side and eventually send them off onto the network in the channel on the left-hand-side". So, we map over the contents of the network channel *in the current timestep*, switching the `src` and `dst` fields and incrementing the integer. This mapped collection is passed back into the network channel, and Bud will ensure those messages are sent off at some point.
+
+```ruby
+declare
+def save_messages
+ saved_buffer <= network_channel
+end
+```
+In `save_messages`, we use the `<=` operator. `<=` says "copy all of the elements in the right-hand-side and add them to the table on the left-hand-side." It's important to note that this movement occurs *within the current timestep*. This means if `saved_buffer` is referenced elsewhere in the code, it will include the contents of `network_channel`. If we had used the `<+` operator instead, the contents of `network_channel` would show up in `saved_buffer` in the *next* timestep. The latter is useful if you'd like to operate on the current contents of `saved_buffer` in the current timestep but want to specify how `saved_buffer` should be updated for the next timestep.
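+
+A minimal sketch of the contrast, assuming a second hypothetical table `audit_log` has been declared with the same schema as `saved_buffer`:
+
+```ruby
+declare
+def contrast_merge_operators
+  audit_log <= network_channel    # contents visible in audit_log *this* timestep
+  saved_buffer <+ network_channel # contents appear in saved_buffer *next* timestep
+end
+```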
+
+Remember, all of this code is executed in *each* timestep - the separation of code into separate methods is merely for readability.
+
+```ruby
+declare
+def send_messages
+ network_channel <~ stdin_input_buffer
+end
+```
+
+`send_messages` operates very much like `increment_messages`, except it reads the contents of `stdin_input_buffer` and places them into the network channel to be sent off at an indeterminate time.
+
+#### Details
+
+Examine Bloom's "style" and compare it to your standard way of programming, and to the Javascript and Ruby timestep/callback examples above. Bloom has a more "declarative" style: what does this mean? Look at our Javascript:
+
+```javascript
+onFiveSecondInterval(function() {
+ recvBuffer.forEach(function(msg) {
+ savedBuffer.push(msg); // save message
+ let newMsg = msg.clone();
+ newMsg.integer++; // increment recv'd message
+ newMsg.flipSourceDestination();
+ sendBuffer.push(newMsg); // send it out
+ });
+
+ stdinInputBuffer.forEach(function(msg) {
+ sendBuffer.push(msg); // send stdin message
+ });
+});
+```
+
+"Every five seconds, loop over the received messages. For each message, do this, then that, then that." We are telling the computer each step we'd like it to perform. In Bud, however, we describe the state of tables and channels at either the current or next timestep using operators and other tables and channels. We describe what we'd like our collections to include and look like, rather than what to do. You declare what you'd like the state of the world to be at the current instant and at following instants.
+
+#### Isn't this chapter about consistency?
+
+It's time to implement our shopping cart in Bloom. We are going to introduce one more collection: a `periodic`. For example, `periodic :timer, 10` instantiates a new periodic collection. This collection becomes "populated" every 10 seconds. Alone, it's not all that useful; however, when `join`'d with another collection, it can be used to perform actions every `x` seconds.
+
+```ruby
+module ShoppingCart
+ include MulticastProtocol
+
+ def state
+ table :cart, ['item']
+ channel :recv_channel, ['@src', 'dst', 'item']
+ # implied channel :stdin_input_buffer, ['item']
+ periodic :timer, 10
+ end
+
+ declare
+ def add_items
+ cart <= stdin_input_buffer
+ end
+
+ declare
+ def send_items
+ send_mcast <~ join([cart, timer]).map { |item, t| item }
+ end
+
+ declare
+ def receive_items
+ cart <+ recv_channel.map { |x| [x.item] }
+ end
+end
+```
+
+`send_mcast` is a special type of channel we receive from the `MulticastProtocol` mixin. It sends all items in the right-hand-side to every known peer.
+* `add_items`: receive items from stdin, add them to the cart
+* `send_items`: join our cart with the 10-second timer. Since the timer only "appears" every 10 seconds, this `join` will produce a result every 10 seconds. When it does, send all cart items to all peers via `send_mcast`.
+* `receive_items`: when we receive a message from a peer, add the item to our cart.
+
+Functionally, this code is equivalent to our working Javascript shopping cart implementation. However, there are a few important things to note:
+* In our Javascript example, we broadcast our entire cart to all peers, and when a peer received a message, it unioned its current cart with the received one. Here, each node broadcasts each element in the cart, and when a node receives an item, it adds it to its current cart. Since tables are represented as sets, repeated or unordered additions do not matter: you can think of `{A, B, C}.add(D)` as equivalent to `{A, B, C}.union({D})` (see the sketch after this list).
+* You cannot add items twice. Since tables are represented as sets and we simply add items to our set, an item can only ever exist once. This was true of our Javascript example as well.
+* You still cannot remove items!
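+
+In plain Ruby, the set equivalence mentioned above is easy to check (a standalone sketch using Ruby's standard `Set`, not part of Bud):
+
+```ruby
+require 'set'
+
+cart = Set['A', 'B', 'C']
+cart.add('D')                 # => #<Set: {"A", "B", "C", "D"}>
+Set['A', 'B', 'C'] | Set['D'] # => the same set; order and repeats don't matter
+```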
+
+Bloom has leveraged the monotonic, add-only set and constructed a declarative programming model based around these sets. When you treat everything as sets (not unlike SQL) and you introduce the notion of "timesteps", you can express programs as descriptions of state rather than an order of operations. Besides being a rather unique model, Bloom presents an accessible and (perhaps...) safe model for programming eventually consistent programs.
+
+#### Sets only?
+Bloom's programming model is built around the set. As Aviral discussed in the previous chapter, however, sets are not the only monotonic data structures. Other CRDTs are incredibly useful for programming eventually consistent distributed programs.
+
+Recall that a *bounded join semilattice* (CRDT) can be represented as a 3-tuple: `(S, U, ⊥)`. `S` is the set of all elements within the semilattice, `U` is the least-upper-bound ("join") operation, and `⊥` is the "least" element within `S`. For example, for add-only sets, `S = the set of all sets`, `U = union`, and `⊥ = {}`. Elements of these semilattices, when `U` is applied, can only "stay the same or get larger": sets can only stay the same size or grow, never roll back. For any element `e` in `S`, `e U ⊥` must equal `e`.
+For a semilattice we'll call `integerMax`, `S = the set of all integers`, `U = max(x, y)`, and `⊥ = -Infinity`. Hopefully you can see that elements of this lattice (integers) "merged" with other elements of this lattice never produce a result less than either of the merged elements.
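+
+As a sketch in Ruby (not part of Bloom), the `integerMax` lattice is just a few lines:
+
+```ruby
+# integerMax: S = integers, U = max, bottom = -Infinity
+INTEGER_MAX_BOTTOM = -Float::INFINITY
+
+def integer_max_merge(a, b)
+  [a, b].max
+end
+
+integer_max_merge(3, 7)                  # => 7
+integer_max_merge(7, INTEGER_MAX_BOTTOM) # => 7, since e U bottom = e
+```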
+
+These semilattices (and many more!) can be used to program other types of distributed, eventually consistent programs. Although sets are powerful, there might be more expressive ways to describe your program. It's not difficult to imagine using `integerMax` to keep a global counter across multiple machines.
+
+Unfortunately, Bloom does not provide support for other CRDTs. In fact, you cannot define your own datatypes at all: you are limited to the collections described above.
+
+Bloom<sup>L</sup>{% cite BloomL --file langs-consistency %}, an addendum to the Bloom language, provides support for these types of data structures. Specifically, Bloom<sup>L</sup> does two things:
+* Adds a number of built-in lattices such as `lmax` (`integerMax`), `lmin`, etc.
+* Adds an "interface" for lattices: the user can define lattices that "implement" this interface.
+
+Expressed in an OO language like Java, this interface would look something like:
+
+```java
+interface Lattice {
+ Lattice leastElement(); // the "least" element, ⊥
+ Lattice merge(Lattice other); // least upper bound of this and other
+}
+```
+
+(For simplicity, we purposely leave out morphisms and monotones here.)
+
+This provides the user with much more freedom in terms of the types of Bloom programs she can write.
+
+#### Review
+
+Bloom aims to provide a new model for writing distributed programs. And since Bloom only allows for monotonic data structures with monotonicity-preserving operations, we're safe from Jerry the intern, right?
+
+Wrong. Unfortunately, I left out an operator from Bloom's set of collection operators: `<-` removes all elements in the right-hand-side from the table in the left-hand-side. So Bloom's sets are *not* add-only. As we've seen from Jerry's work on our original Javascript shopping cart implementation, naively attempting to remove elements from a distributed set is not a safe operation. Rollbacks can potentially destroy the properties we worked so hard to achieve. So what gives? Why would the Bloom developers add this operation?
+
+Despite putting so much emphasis on consistency via logical monotonicity, the Bloom programmers recognize that your program might need *some* coordination.
+
+In our example, we don't require coordination. We accept the fact that a user may ask a given node for the current state of her shopping cart and may not receive the most up-to-date response. There's no need for coordination, because we've used our domain knowledge to accept a compromise.
+
+For our shopping cart examples: when a client asks a given node what's in her cart, that node will respond with the information it's received so far. We know this information won't be *incorrect*, but this data could be *stale*. That client might be missing information.
+
+The Bloom team calls points like the one above (a user asking to check out the contents of the cart at a given node) *points of order*. These are points in your program where coordination may be required: depending on when and whom you ask, you may receive a different response. In fact, the Bloom developers provide analysis tools for identifying points of order within your program. There's no reason why you couldn't implement a non-monotonic shopping cart in which all nodes must synchronize before giving a response to the user. The Bloom analysis tool would tell you where the points of order lie in your program, and you as the programmer could decide whether or not (and how!) to add coordination.
+
+So what does Bloom really give us? First off, it demonstrates an unusual and possibly more expressive way to program distributed systems. Consistency-wise, it uses sets under the hood for its collections. As long as you shy away from the `<-` operator, you can be confident that your collections will only grow monotonically. And since monotonic collections don't depend on the order in which network messages arrive, structuring these eventually consistent applications is reasonably easy within Bloom. Bloom<sup>L</sup> also gives us the power to define our own monotonic data structures by "implementing" the lattice interface.
+
+However, Bloom makes it easy to program non-monotonic distributed programs as well. Applications may require coordination, and the `<-` operator in particular can cause serious harm to our desired formal properties. Luckily, Bloom attempts to let the programmer know exactly when coordination may be required within their programs: whenever an operation may return a stale value, Bloom's analysis tools let the programmer know.
+
+Another thing to consider: Bloom<sup>L</sup>'s user-defined lattices are just that - user-defined. It's up to the programmer to ensure that the data structures that implement the lattice interface are actually valid lattice structures. If your structures don't follow the rules, your program will behave in some seemingly strange ways.
+
+Currently Bloom exists as a Ruby prototype: Bud. Hypothetically speaking, there's nothing stopping the programmer from writing normal, sequentially evaluated Ruby code within Bud. This can also cause harm to our formal properties.
+
+All in all, Bloom provides programmers with a new model for writing distributed programs. If the user desires monotonic data structures and operations, it's relatively easy to use and reason about. Rather than blindly destroying the properties of your system, you will know exactly when you introduce a possible point of order into your program. It's up to you to decide whether or not you need to introduce coordination.
+
+### Lasp
+Lasp {% cite Lasp --file langs-consistency %} is an Erlang library which aims to facilitate this type of "disorderly" programming.
+
+Lasp provides access to a myriad of CRDTs. It does not allow user-defined CRDTs (lattices), but the programmer can be confident that the built-in CRDTs obey the formal lattice requirements.
+
+A Simple Lasp Program is defined as either:
+* A single CRDT instance
+* A "Lasp process" with *m* inputs (each itself a Simple Lasp Program) and one output CRDT instance
+
+For those of you unfamiliar with Erlang: a *process* can be thought of as an independent piece of code executing asynchronously. Processes communicate by sending and receiving messages, and subscription-style patterns are built on top of this primitive.
+
+Programming in Erlang is unique in comparison to programming in Ruby or Javascript. Erlang processes are spun off for just about everything: they are independent "nodes" of code, each acting on its own while communicating with other processes. Naturally, distributed systems programming fits well here. Processes can live within a single computer or be spread across a cluster of computers, so communication between processes may travel over the network.
+
+Distribution of a data structure, then, means the transmission of a data structure across network-distributed processes. If a client asks for the state of the shopping cart in Beijing, the processes located on the computer in Beijing will respond. However, the processes in New York may disagree. Thus, our task is to distribute our data structures (CRDTs, right?) across distributed processes.
+
+So, what's a "Lasp process"? A Lasp process is a process that operates on lattice elements, or CRDTs. Three popular Lasp processes are `map`, `fold`, and `filter`.
+
+* `map`: If you're familiar with functional programming, these functions shouldn't appear too foreign. `map` spins off a never-ending process which applies a user-supplied `f` to all the replicas of a given CRDT this process receives (see the sketch after this list).
+* `fold`: Spins off a process that continuously folds input CRDT values into another CRDT value using a user-provided function.
+* `filter`: Spins off a process that continuously picks specific CRDT input values based on a user-provided filtering function.
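+
+Lasp's real API is Erlang, but conceptually (sticking with the Ruby used elsewhere in this chapter, and using hypothetical `observe` and `update` helpers) a `map` process behaves something like the following sketch:
+
+```ruby
+require 'set'
+
+# Conceptual sketch only -- not Lasp's actual Erlang API.
+# A "map process" keeps an output set CRDT equal to the image of its
+# input set CRDT under f, re-deriving it as new replicas arrive.
+def map_process(input_crdt, output_crdt, f)
+  loop do
+    input_state = observe(input_crdt)                 # hypothetical helper
+    update(output_crdt, Set.new(input_state.map(&f))) # hypothetical helper
+  end
+end
+```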
+
+Drawing parallels to our mock-Bloom-Ruby-callback implementation, we remember that CRDT modifications and movements can be modeled using functional styles. In Bloom, we dealt with mapping values from "collections" to other "collections". These collections were backed by CRDT-like sets.
+
+Here, we are mapping "streams" of CRDT instances to other CRDT instances using the same functional programming methods.
+
+However, here, the stream manipulations occur within unique processes distributed across a network of computers. These processes consume CRDTs and produce new ones based on functions provided by the user.
+
+There's one hiccup though: the user can't provide *any* function to these processes. Since our datatypes must obey certain properties, the functions that operate on our data must preserve these properties.
+
+Recall that a lattice carries a partial order: elements are related by `<=`. For example, with add-only sets, `{A} <= {A} <= {A, B} <= {A, B} <= {A, B, C}`, where `<=` is the subset relation. A *monotonic* function that operates over the domain of add-only sets must preserve this partial ordering: if `{A} <= {A, B}` and `f` is a monotonic function over add-only sets, then `f({A}) <= f({A, B})`.
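+
+We can sanity-check this property for one monotone function, the image of a set under a per-element transformation (a standalone Ruby sketch, not part of Lasp):
+
+```ruby
+require 'set'
+
+image = ->(s) { Set.new(s.map(&:upcase)) } # f = the image of a set under upcase
+
+a = Set['a']
+b = Set['a', 'b']
+a.subset?(b)                 # => true:  {a} <= {a, b}
+image.(a).subset?(image.(b)) # => true:  f({a}) <= f({a, b})
+```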
+
+This ensures the preservation of our consistency properties across our ever-interacting processes.
+
+#### A Library
+
+Remember that Lasp is an Erlang *library*. Within your existing Erlang program, you're free to drop in some interacting Lasp-processes. These processes will communicate using CRDTs and functions over CRDTs. As such, your Lasp sub-program is guaranteed to exhibit strong eventual consistency properties.
+
+However, the rest of your Erlang program is not. Since Lasp is embeddable, it has no control over the rest of your Erlang program, and you must be sure to use Lasp in a safe way. But since it doesn't provide the programmer with the ability to perform non-monotonic operations within the Lasp context, the programmer can have significant confidence in the eventual consistency of the Lasp portion of the program. We still aren't totally safe from Jerry the intern, though, since Jerry can modify our outer Erlang code to do some dangerous things.
+
+Bloom provided a new model for distributed programming, whereas Lasp aims to provide existing distributed systems with a drop-in solution for adding eventually consistent parts.
+
+### Utilization
+
+Compare Lasp and Bloom:
+
+Lasp:
+* An Erlang library, meant to be used in every-day Erlang programs.
+* Built-in CRDTs. Does not allow user-defined CRDTs (for now).
+* All data structures are CRDTs and all operations are logically monotonic.
+* Thus, it's essentially impossible to construct a non-monotonic program *using only the Lasp library*.
+* It is possible to use Lasp in a non-monotonic way by disrupting the outer Erlang code.
+* Follows well-known functional programming patterns and is compatible with idiomatic Erlang style.
+
+Bloom:
+* Aims to be a full-featured language. Is not meant to be embeddable.
+* Built-in set collections only; Bloom<sup>L</sup> adds user-defined lattices (CRDTs).
+* Its sets are not add-only and thus not exclusively logically monotonic. User-defined lattices carry no formal proofs of their consistency guarantees.
+* It's possible to construct non-monotonic programs, for example by using the `<-` operator.
+* With the prototype, Bud, it's possible to use normal Ruby code to disrupt Bloom's properties. But this is more a result of the prototype implementation, not the design.
+* Uses a unique programming model based on temporal logic.
+* Contains an analysis tool that tells programmers which points in their code might require coordination, depending on the consistency concerns of the application.
+
+Although they are fundamentally different in many ways, Lasp and Bloom accept a key reality: it's probably impossible to program using eventual consistency guarantees only. It works for shopping carts, but there will always be situations where coordination between machines must occur. Lasp's and Bloom's designs reflect different approaches for dealing with this harsh truth.
+
+Lasp, on one hand, aims to be an embeddable eventually consistent library. If you're an Erlang developer and you recognize a situation in which you can accept eventually consistent behavior, you can reach for the Lasp library. Within your existing code, you can add communication mechanisms using Lasp and be confident of the properties advertised by eventually consistent systems: no need to change your entire system or rewrite code in a different language. Since Lasp does not allow the expression of non-monotonic programs, you express non-monotonicity *outside* of the Lasp sections of your code.
+
+Bloom, on the other hand, aims to be an entirely new model for expressing distributed systems problems. By using CRDT-like sets for its collections, Bloom encourages a declarative way of programming without enforcing too much coordination. It even lets the user define her own lattices with Bloom<sup>L</sup> to further encourage this type of programming. But since there will always be times when coordination is necessary, Bloom allows for operations that may require coordination, and even lets the user perform non-monotonic operations such as `<-`. Bloom, in a way, must do this: it aims to be a general model for expressing distributed systems programs, so it must provide mechanisms for coordination. Lasp is embeddable, so it can perform one specific job; Bloom is not, so it must allow many types of programs. To mitigate the risk, Bloom provides analysis tools to help the programmer identify points in the code that may not be totally safe. The programmer can then decide to coordinate or to ignore these "points of order".
+
+Most programming languages are "general-purpose". This works for single-machine programming. As the world moves toward distributed programming, programmers must adopt models, languages, and libraries built for their domain. It forces serious thought on the part of the programmer: what *exactly* am I trying to achieve, and what am I willing to sacrifice?
+
+Bloom could potentially facilitate distributed systems programming through a new, temporal model. The Bloom developers have designed a language for a specific purpose: distributed programming. The Lasp developers take this philosophy even further: design a library for a specific subset of distributed systems programming. Although one goes deeper than the other, the two share an idea: languages and models should be built for subsets of the computing domain. Distributed systems produce difficult problems. When we put our heads together and develop tools to facilitate distributed systems programming (Bloom), and specifically *eventually consistent* distributed systems programming (Lasp), programming gets easier: fewer bugs pop up, and it becomes easier to formally reason about the behavior of our programs.
+
+When a language or model tries to do everything well, it cannot provide formal guarantees or tools to facilitate specific kinds of problem solving. Since different domains have totally different needs and issues to deal with, general-purpose programming languages simply try to provide the minimum required for a wide variety of software problems.
+
+If we shift our mindset as software developers and begin to develop and look for tools to help us with specific problems and domains of problems, we can leverage computers much more than we do today. Our tools can provide relevant feedback and help us design our systems. They can even provide formal properties that we need not question.
+
+Critically, it requires a narrowing of our problem domain. It means inspecting our problem and asking: what do we need, and what's not so important?
+
+In this chapter, we examined ways in which tools can help us leverage eventually consistent distributed systems. But there's no reason why this philosophy couldn't be applied to other subsections of the CAP pyramid. In fact, there's no reason why this philosophy couldn't be applied to other areas of computing in general. Why are both video games and distributed systems programmed using the same language & models?
+
+Even if you don't encounter consistency issues in your day-to-day life, this idea applies to many areas of computing and tools in general. Hopefully you can begin to ask yourself and those around you: what tasks are we trying to accomplish, and how can our tools help us accomplish them?
## References
-{% bibliography --file langs-consistency %} \ No newline at end of file
+{% bibliography --file langs-consistency %}
diff --git a/chapter/8/big-data.md b/chapter/8/big-data.md
index 0073cc2..a04c72a 100644
--- a/chapter/8/big-data.md
+++ b/chapter/8/big-data.md
@@ -706,8 +706,8 @@ Spark achieves fault tolerant, high throughput data streaming workloads in real-
*Apache Mesos*
-Apache Mesos{%cite hindman2011mesos --file big-data%} is an open source heterogenous cluster/resource manager developed at the University of California, Berkley and used by companies such as Twitter, Airbnb, Netflix etc. for handling workloads in a distributed environment through dynamic resource sharing and isolation. It aids in the deployment and management of applications in large-scale clustered environments. Mesos abstracts node allocation by combining the existing resources of the machines/nodes in a cluster into a single pool and enabling fault-tolerant elastic distributed systems. Variety of workloads can utilize the nodes from this single pool voiding the need of allocating specific machines for different workloads. Mesos is highly scalable, achieves fault tolerance through Apache Zookeeper {%cite hunt2010zookeeper --file big-data%} and is a efficient CPU and memory-aware resource scheduler.
+Apache Mesos {%cite hindman2011mesos --file big-data%} is an open source heterogeneous cluster/resource manager developed at the University of California, Berkeley and used by companies such as Twitter, Airbnb, and Netflix for handling workloads in a distributed environment through dynamic resource sharing and isolation. It aids in the deployment and management of applications in large-scale clustered environments. Mesos abstracts node allocation by combining the existing resources of the machines/nodes in a cluster into a single pool, enabling fault-tolerant, elastic distributed systems. A variety of workloads can utilize the nodes from this single pool, avoiding the need to allocate specific machines to different workloads. Mesos is highly scalable, achieves fault tolerance through Apache Zookeeper {%cite hunt2010zookeeper --file big-data%}, and is an efficient CPU- and memory-aware resource scheduler.
*Alluxio/Tachyon*