AN X APPLICATION
Above is an example of a simple but complete X application. Functionally, the application receives orders, initializes and persists order state in its durable in-memory store, and dispatches a message notifying the rest of the world that a new order has been created. Non-functionally, the application is fully fault tolerant: it incurs zero data or message loss on network, process, or machine failure; it guarantees that inbound messages are delivered to the application message handler once and only once, even across failures; and it achieves order-to-event latencies of less than 30us when configured for low latency, or upwards of 100,000 orders per second when configured for throughput, all with zero garbage collection.
To answer some questions that you likely have after seeing the above...
Q: Where are the message classes coming from?
A: You define messages in an easy-to-use XML format based on X’s Application Data Modeling Language (X-ADML). At build time, X parses the XML and generates highly efficient message classes that you use as regular Java classes in your code. X provides plug-ins for popular build frameworks such as Maven and Ant to integrate the code generation seamlessly into the application build lifecycle.
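For a feel of what such a model might look like, here is a purely illustrative sketch; the element and attribute names below are guesses for illustration, not the actual X-ADML schema:

```xml
<!-- Hypothetical sketch only; real X-ADML element and attribute names may differ. -->
<model name="orders">
  <messages>
    <message name="NewOrderMessage">
      <field name="orderId" type="String"/>
      <field name="quantity" type="Long"/>
    </message>
  </messages>
</model>
```

From a model like this, the build-time generator would emit a `NewOrderMessage` Java class with typed accessors for each field.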
Q: How easy is it to write a message handler?
A: Very easy. Annotate the method that takes the message to be handled as its single argument. At startup, you supply X with the objects containing one or more annotated handlers; X then builds its dispatch table by discovering the handlers via their annotations. Inside a handler, you work with the message and application state just as you would with regular Java objects.
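The pattern can be sketched in plain Java as follows. Note that the annotation name `@EventHandler` and the discovery code are stand-ins for illustration; X's actual annotation and dispatch machinery may differ:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical stand-in for X's handler annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface EventHandler {}

// Hypothetical stand-in for a generated message class.
class NewOrderMessage {
    private String orderId;
    String getOrderId() { return orderId; }
    void setOrderId(String id) { this.orderId = id; }
}

class OrderHandler {
    // The engine discovers this method by its annotation and its single
    // message-typed parameter, then routes matching messages to it.
    @EventHandler
    public void onNewOrder(NewOrderMessage msg) {
        System.out.println("Received order " + msg.getOrderId());
    }
}

public class HandlerDiscoveryDemo {
    public static void main(String[] args) throws Exception {
        // Mimic the dispatch-table build: scan for annotated handler methods.
        for (Method m : OrderHandler.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(EventHandler.class)) {
                NewOrderMessage msg = new NewOrderMessage();
                msg.setOrderId("ORD-1");
                m.invoke(new OrderHandler(), msg);
            }
        }
    }
}
```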
Q: How does X ensure zero data and message loss when the state is in memory?
A: X ensures zero message loss by using guaranteed message delivery, i.e., messages are persisted in the messaging layer until stabilized by X at the application layer. Once stabilized at the application layer, X acknowledges receipt to the messaging layer, which then removes the message from its persistent storage.
To ensure zero data loss, X organizes the application into a cluster comprising one primary and zero or more backups. When a message is processed by the primary, X ensures that all effects of the processing (i.e., changes to the application state and the outbound messages notifying the world of the results) are replicated to the backup cluster members before the messaging layer is notified of this stability, after which the inbound message can be removed from messaging storage.
All of the above is performed by the Atomic Event Processing (AEP) engine. Each X application gets an instance of the AEP engine, which brokers traffic between the messaging layer and the application in a transactional manner, as described above, to ensure zero data loss and once-and-only-once message delivery. AEP operates its transaction pipeline in a completely non-blocking manner, yielding extremely high throughput and remarkably low latencies.
X supports two HA models:
- Event Sourcing
- State Replication
Each of the above accomplishes the same goal (zero message and data loss in the face of failures), but by different means. With Event Sourcing, X replicates the inbound messages and processes them concurrently on all cluster members. Outbound messages are suppressed on the backup members and dispatched only from the primary. The concurrent processing ensures parity of state and messaging metadata across all members of the cluster. With State Replication, X processes the message only on the primary and replicates the state changes made by the message handler, the outbound messages dispatched by the handler, and the inbound message metadata to the backup members. This replication ensures the same parity of state and messaging metadata across the cluster.
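The contrast between the two models can be illustrated with a toy sketch in plain Java (this is not X's API, just a model of the two replication strategies applied to a counter "application" whose state is a running total of order quantities):

```java
import java.util.ArrayList;
import java.util.List;

public class HaModelsDemo {
    public static void main(String[] args) {
        int[] inboundQuantities = {5, 3, 7};

        // Event Sourcing: every member processes the same inbound stream,
        // so the backup derives its state by running the same handler logic.
        long esPrimary = 0, esBackup = 0;
        for (int q : inboundQuantities) {
            esPrimary += q; // primary processes and dispatches outbound messages
            esBackup += q;  // backup processes too, but suppresses outbound messages
        }

        // State Replication: only the primary runs the handler; it ships the
        // resulting state changes (deltas) for the backup to apply.
        long srPrimary = 0, srBackup = 0;
        List<Long> replicatedDeltas = new ArrayList<>();
        for (int q : inboundQuantities) {
            srPrimary += q;                 // handler runs on the primary only
            replicatedDeltas.add((long) q); // state change replicated to backups
        }
        for (long delta : replicatedDeltas) {
            srBackup += delta; // backup applies changes without re-processing
        }

        // All four totals agree: both models converge on the same state.
        System.out.println(esPrimary + " " + esBackup + " " + srPrimary + " " + srBackup);
    }
}
```

The point of the sketch is that both paths end in identical state on every member; they differ only in whether the backup re-derives state (Event Sourcing) or passively applies replicated changes (State Replication).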
In both models, in addition to replicating state across cluster members, X journals the inbound messages (Event Sourcing) or state change events (State Replication) in a high-performance, durable transaction log. Whether the log is written synchronously or asynchronously is quorum driven. The journal is used to reconstitute state after a full downtime. It can be queried using SQL and replicated asynchronously to a remote cluster using ICR (Inter-Cluster Replication).
Q: Do I have to code anything beyond above? How is the messaging machinery started? How is the application bootstrapped?
A: No. Aside from putting the above handler in a Java class, you do not have to do anything; everything else is driven by configuration.
Q: Do I have to use orderPool.get()? Can I not just new up orders?
A: Yes, you can new up order objects on the fly, as is regular Java practice. However, in doing so, even if the objects are long lived, you will exercise the garbage collector at some point. For ultra-low-latency applications, this can be a performance killer. To accommodate such applications, X exposes its internal zero-garbage facilities, such as its object pooling and pre-allocation framework, for use by the application in writing zero-garbage code.
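The pooling idea itself can be sketched in a few lines of plain Java. This is only an illustration of the pattern behind something like `orderPool.get()`; X's actual pooling and pre-allocation framework has its own API:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal sketch of an object pool: objects are pre-allocated once and
// recycled, so steady-state operation creates no garbage.
final class ObjectPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    ObjectPool(Supplier<T> factory, int preallocate) {
        this.factory = factory;
        // Pre-allocate up front so no allocation happens on the hot path.
        for (int i = 0; i < preallocate; i++) free.push(factory.get());
    }

    T get() {
        // Reuse a pooled instance when available; fall back to allocation.
        return free.isEmpty() ? factory.get() : free.pop();
    }

    void release(T obj) {
        // Return the object for reuse instead of letting it become garbage.
        free.push(obj);
    }
}
```

Used this way, the handler's hot path borrows and returns objects rather than allocating, which is what keeps the garbage collector idle.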
Q: How do I send a message?
A: Very easily. Create the message, populate it, and send it using the AEP engine’s sendMessage() method. You do not have to worry about the fact that the transaction machinery holds onto outbound messages until replication is complete, or discards them on backups under Event Sourcing; X takes care of all of that transparently. You just send the message, and X ensures that it is sent in a transactionally consistent manner.
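In code, the create/populate/send flow might look like the following. The `AepEngine` and `OrderEvent` classes below are simplified stand-ins defined inline for illustration; only `sendMessage()` is named in the text above, and the real engine's signature may differ:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an outbound message class.
class OrderEvent {
    String orderId;
}

// Hypothetical stand-in for the AEP engine: here sendMessage() just queues
// the message; in X it enters the transaction pipeline, which holds it
// until replication stabilizes (or suppresses it on backups).
class AepEngine {
    final List<Object> outbound = new ArrayList<>();

    void sendMessage(Object msg) {
        outbound.add(msg);
    }
}

public class SendDemo {
    public static void main(String[] args) {
        AepEngine engine = new AepEngine();
        OrderEvent evt = new OrderEvent(); // create the message
        evt.orderId = "ORD-1";             // populate it
        engine.sendMessage(evt);           // send; X handles transactional consistency
    }
}
```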
Q: Can I use POJOs to store my state?
A: Yes, but only with Event Sourcing. With Event Sourcing, X does not need visibility into the application state, since the state is regenerated on each cluster member through parallel processing. For high-performance applications, X provides zero-garbage, high-performance implementations of Java maps and collections that you can use (or you can use your favorite third-party implementation if you prefer).
With State Replication, X needs visibility into the application state, since it must track the changes being made to that state. For this, the application state graph is modeled using X-ADML (as with messages). X generates the state classes from the ADM model, and the application handler manipulates them just like regular Java classes.
Q: What are extractors and populators?
A: X advocates a design pattern that cleanly separates the domain and messaging models. The domain model is considered private to the application, and communication with the outside world is performed exclusively via message passing. This allows for contractual yet agile interaction among the applications in a multi-agent system, since the domain and messaging models can evolve independently. To honor such a model, however, the boundary layer between the messaging and domain models must be highly performant (since data copying is expensive) and flexible (since the two models evolve independently). X recommends (but does not mandate) the use of extractors and populators to bridge the domain and messaging models. These are classes that can be either hand coded or generated by X.
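A hand-coded pair might look like the following sketch. The class and field names here are invented for illustration; in X the message class would come from the X-ADML model and the pair could instead be generated:

```java
// Domain model: private to the application.
class Order {
    String id;
    long quantity;
}

// Messaging model: crosses the application boundary.
class OrderMessage {
    String orderId;
    long qty;
}

// Populator: copies message fields into domain state on the way in.
final class OrderPopulator {
    static void populate(Order order, OrderMessage msg) {
        order.id = msg.orderId;
        order.quantity = msg.qty;
    }
}

// Extractor: copies domain state into an outbound message on the way out.
final class OrderExtractor {
    static void extract(Order order, OrderMessage msg) {
        msg.orderId = order.id;
        msg.qty = order.quantity;
    }
}
```

Because the copy logic is isolated in these two classes, either side of the boundary can rename or restructure its fields without rippling into the other, at the cost of one explicit field-by-field copy.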
Q: What else is X doing under the covers?
A: Let’s summarize:
- Management of the messaging machinery
- Topic subscriptions for publish-subscribe based messaging backbones
- Receipt and dispatch of inbound and outbound messages
- Zero copy, "cut-through" serialization/de-serialization of messages between the in-memory format usable by the application and the raw serialized format transmitted by the messaging provider.
- Transactional message delivery across network, process and machine failures
- Clustering of application instances with full redundancy of the in-memory state graph
- Event Sourcing and State Replication HA models
- Quorum based journaling of application messages and/or state for recovery from full downtime.
- Transparent role election and intra-cluster failover
- Inter-cluster replication of the transaction log
- Live log compaction
- Live log CDC (change data capture) - for example, if you want to siphon the content of the log into an RDBMS
- SQL based querying of transaction log and/or in-memory state
Q: I love this stuff! Where can I get my hands on the product and some sample code?