Data Oriented Programming in Java

Key Takeaways

  • Project Amber has brought a number of new features to Java in recent years. While each of these features is self-contained, they are also designed to work together. Specifically, records, sealed classes, and pattern matching work together to enable easier data-oriented programming in Java.
  • OOP encourages us to model complex entities and processes using objects, which combine state and behavior. OOP is at its best when it is defining and defending boundaries. 
  • Java's strong static typing and class-based modeling can still be tremendously useful for smaller programs, just in different ways.
  • Data-oriented programming encourages us to model data as (immutable) data, and keep the code that embodies the business logic of how we act on that data separate. Records, sealed classes, and pattern matching make that easier.
  • When we're modeling complex entities, OO techniques have a lot to offer us. But when we're modeling simple services that process plain, ad-hoc data, the techniques of data-oriented programming may offer us a straighter path.
  • The techniques of OOP and data-oriented programming are not at odds; they are different tools for different granularities and situations. We can freely mix and match them as we see fit.

Project Amber has brought a number of new features to Java in recent years -- local variable type inference, text blocks, records, sealed classes, pattern matching, and more. While each of these features is self-contained, they are also designed to work together. Specifically, records, sealed classes, and pattern matching work together to enable easier data-oriented programming in Java. In this article, we'll cover what is meant by this term and how it might affect how we program in Java.

Object-oriented programming

The goal of any programming paradigm is to manage complexity. But complexity comes in many forms, and not all paradigms handle all forms of complexity equally well. Most programming paradigms have a one-sentence slogan of the form "Everything is a ..."; for OOP, this is obviously "everything is an object." Functional programming says "everything is a function"; actor-based systems say "everything is an actor", etc. (Of course, these are all overstatements for effect.)

OOP encourages us to model complex entities and processes using objects, which combine state and behavior. OOP encourages encapsulation (object behavior mediates access to object state) and polymorphism (multiple kinds of entities can be interacted with using a common interface or vocabulary), though the mechanisms for accomplishing these goals vary across OO languages. When modeling the world with objects, we are encouraged to think in terms of is-a (a savings account is-a bank account) and has-a (a savings account has-an owner and an account number) relationships.

While some developers take pleasure in loudly declaring object-oriented programming to be a failed experiment, the truth is more subtle; like all tools, it is well-suited to some things and less well-suited to others. OOP done badly can be awful, and a lot of people have been exposed to OOP principles taken to ridiculous extremes. (Rants like the kingdom of nouns may be amusing and therapeutic, but they're not really railing against OOP, as much as a cartoon exaggeration of OOP.) But if we understand what OOP is better or worse at, we can use it where it offers more value and use something else where it offers less.

OOP is at its best when it is defining and defending boundaries -- maintenance boundaries, versioning boundaries, encapsulation boundaries, compilation boundaries, compatibility boundaries, security boundaries, etc.
Independently maintained libraries are built, maintained, and evolved separately from the applications that depend on them (and from each other), and if we want to be able to freely move from one version of the library to the next, we need to ensure that boundaries between libraries and their clients are clear, well-defined, and deliberate. Platform libraries may have privileged access to the underlying operating system and hardware, which must be carefully controlled; we need a strong boundary between the platform libraries and the application to preserve system integrity. OO languages provide us with tools for precisely defining, navigating, and defending these boundaries.

Dividing a large program into smaller parts with clear boundaries helps us manage complexity because it enables modular reasoning -- the ability to analyze one part of the program at a time, but still reason about the whole. In a monolithic program, placing sensible internal boundaries helped us build bigger applications that spanned multiple teams. It is no accident that Java thrived in the era of monoliths.

Since then, programs have gotten smaller; rather than building monoliths, we compose larger applications out of many smaller services. Within a small service, there is less need for internal boundaries; sufficiently small services can be maintained by a single team (or even a single developer.) Similarly, within such smaller services, we have less need for modeling long-running stateful processes.

Data-oriented programming

Java's strong static typing and class-based modeling can still be tremendously useful for smaller programs, just in different ways. Where OOP encourages us to use classes to model business entities and processes, smaller codebases with fewer internal boundaries will often get more mileage out of using classes to model data. Our services consume requests that come from the outside world, such as via HTTP requests with untyped JSON/XML/YAML payloads. But only the most trivial of services would want to work directly with data in this form; we'd like to represent numbers as int or long rather than as strings of digits, dates as classes like LocalDateTime, and lists as collections rather than long comma-delimited strings. (And we want to validate that data at the boundary, before we act on it.)
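As a small sketch of that boundary conversion, a record can give the wire data a typed, validated shape. (The Reservation type and its fields here are invented for illustration; they are not from any particular API.)

```java
import java.time.LocalDate;
import java.util.Arrays;
import java.util.List;

// Hypothetical boundary conversion: turn stringly-typed wire data into typed data.
record Reservation(LocalDate date, List<Integer> seatNumbers) {
    static Reservation fromWire(String date, String seats) {
        return new Reservation(
            LocalDate.parse(date),           // rejects malformed dates at the boundary
            Arrays.stream(seats.split(","))
                  .map(Integer::parseInt)    // rejects non-numeric seat numbers
                  .toList());
    }
}
```

Past this point, the rest of the program works with `LocalDate` and `List<Integer>`, never with raw strings.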

Data-oriented programming encourages us to model data as (immutable) data, and keep the code that embodies the business logic of how we act on that data separate. As this trend towards smaller programs has progressed, Java has acquired new tools to make it easier to model data as data (records), to directly model alternatives (sealed classes), and to flexibly destructure polymorphic data (pattern matching).

Data-oriented programming encourages us to model data as data. Records, sealed classes, and pattern matching work together to make that easier.

Programming with data as data doesn't mean giving up static typing. One could do data-oriented programming with only untyped maps and lists (one often does in languages like JavaScript), but static typing still has a lot to offer in terms of safety, readability, and maintainability, even when we are only modeling plain data. (Undisciplined data-oriented code is often called "stringly typed", because it uses strings to model things that shouldn't be modeled as strings, such as numbers, dates, and lists.)

Data-oriented programming in Java

Records, sealed classes, and pattern matching are designed to work together to support data-oriented programming. Records allow us to simply model data using classes; sealed classes let us model choices; and pattern matching provides us with an easy and type-safe way of acting on polymorphic data. Support for pattern matching has come in several increments; the first added only type-test patterns and only supported them in instanceof; the next supported type-test patterns in switch as well; and most recently, deconstruction patterns for records were added in Java 19. The examples in this article will make use of all of these features.
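As a quick refresher, the two earlier pattern matching increments look like this (a minimal sketch, with invented method names):

```java
// Sketch of the two earlier pattern matching increments.
class PatternDemo {
    // Type-test pattern in instanceof: test and cast in one step.
    static int lengthIfString(Object o) {
        if (o instanceof String s) {   // s is bound only if the test succeeds
            return s.length();
        }
        return -1;
    }

    // Type-test patterns in switch: one case per type.
    static String describe(Object o) {
        return switch (o) {
            case Integer i -> "int: " + i;
            case String s  -> "string: " + s;
            default        -> "something else";
        };
    }
}
```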

While records are syntactically concise, their main strength is that they let us cleanly and simply model aggregates. Just as with all data modeling, there are creative decisions to make, and some modelings are better than others. Using the combination of records and sealed classes also makes it easier to make illegal states unrepresentable, further improving safety and maintainability.

Example -- command-line options

As a first example, consider how we might model invocation options in a command line program. Some options take arguments; some do not. Some arguments are arbitrary strings, whereas others are more structured, such as numbers or dates. Processing command line options should reject bad options and malformed arguments early in the execution of the program. A quick-and-dirty approach might be to loop through the command line arguments and for each known option we encounter, squirrel away the presence or absence of the option, and possibly the option's parameter, in variables. This is simple, but now our program is dependent on a set of stringly-typed, effectively global variables. If our program is tiny, this might be OK, but it doesn't scale very well. Not only is this likely to impede maintainability as the program grows, but it makes our program less testable -- we can only test the program as a whole through its command line.
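A sketch of that quick-and-dirty approach (the option names and variables here are invented for illustration, and the code is deliberately bad):

```java
import java.util.List;

// Deliberately quick-and-dirty: stringly-typed, effectively global state.
class QuickAndDirty {
    static String inputFile;           // null means "not specified"
    static boolean printLineNumbers;

    static void parseArgs(List<String> args) {
        for (int i = 0; i < args.size(); i++) {
            switch (args.get(i)) {
                case "--input-file"   -> inputFile = args.get(++i);  // hope an argument follows!
                case "--line-numbers" -> printLineNumbers = true;
                default -> throw new IllegalArgumentException("unknown option: " + args.get(i));
            }
        }
    }
}
```

Everything downstream of parseArgs now depends on these mutable statics, which is exactly the testability problem described above.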

A slightly less quick-and-dirty approach might be to create a single class representing a command line option, and parse the command line into a list of option objects. If we had a cat-like program that copies lines from one or more files to another, can trim files to a certain line count, and can optionally include line numbers, we might model these options using an enum and an Option class:

enum MyOptions { INPUT_FILE, OUTPUT_FILE, MAX_LINES, PRINT_LINE_NUMBERS }
record OptionValue(MyOptions option, String optionValue) { }
static List<OptionValue> parseOptions(String[] args) { ... }

This is an improvement over the previous approach; at least now there is a clean separation between parsing the command line options and consuming them, which means we can test our business logic separately from the command-line shell by feeding it lists of options. But it's still not very good. Some options have no parameter, but we can't see that from looking at the enum of options, and we still model them with an OptionValue object that has an optionValue field. And even for options that do have parameters, they are always stringly typed.

The better way to do this is to model each option directly. Historically, this might have been prohibitively verbose, but fortunately this is no longer the case. We can use a sealed class to represent an Option, and have a record for each kind of option:

sealed interface Option { 
    record InputFile(Path path) implements Option { }
    record OutputFile(Path path) implements Option { }
    record MaxLines(int maxLines) implements Option { }
    record PrintLineNumbers() implements Option { }
}

The Option subclasses are pure data. The option values have nice clean names and types; options that have parameters represent them with the appropriate type; options without parameters do not have useless parameter variables that might be misinterpreted. Further, it is easy to process the options with a pattern matching switch (usually one line of code per kind of option.) And because Option is sealed, the compiler can type-check that a switch handles all the option types. (If we add more option types later, the compiler will remind us which switches need to be extended.)
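Consuming the options might then look like this (a sketch, with the Option hierarchy redeclared so the example is self-contained; note the absence of a default clause):

```java
import java.nio.file.Path;

sealed interface Option {
    record InputFile(Path path) implements Option { }
    record OutputFile(Path path) implements Option { }
    record MaxLines(int maxLines) implements Option { }
    record PrintLineNumbers() implements Option { }
}

class OptionDemo {
    // Because Option is sealed, this switch needs no default clause --
    // the compiler verifies that all four cases are covered.
    static String describe(Option opt) {
        return switch (opt) {
            case Option.InputFile(Path p)   -> "read from " + p;
            case Option.OutputFile(Path p)  -> "write to " + p;
            case Option.MaxLines(int n)     -> "at most " + n + " lines";
            case Option.PrintLineNumbers()  -> "number the lines";
        };
    }
}
```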

We've probably all written code like that outlined in the first two versions, even though we know better. Without the ability to cleanly and concisely model the data, doing it "right" is often too much work (or too much code.)

What we've done here is take messy, untyped data from across the invocation boundary (command line arguments) and transformed it into data that is strongly typed, validated, easily acted upon (through pattern matching), and makes many illegal states (such as specifying --input-file but not providing a valid path) unrepresentable. The rest of the program can just use it with confidence.

Algebraic data types

This combination of records and sealed types is an example of what are called algebraic data types (ADTs). Records are a form of "product types", so-called because their state space is the cartesian product of that of their components. Sealed classes are a form of "sum types", so-called because the set of possible values is the sum (union) of the value sets of the alternatives. This simple combination of mechanisms -- aggregation and choice -- is deceptively powerful, and shows up in many programming languages. (Our example here was restricted to one level of hierarchy, but this need not be the case in general; one of the permitted subtypes of a sealed interface could be another sealed interface, allowing modeling of complex structures.)
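As a small illustration of such nesting (a hypothetical shape domain, invented for illustration), a sum type can itself contain another sum type:

```java
sealed interface Shape { }
sealed interface Polygon extends Shape {
    record Triangle(double a, double b, double c) implements Polygon { }
    record Rectangle(double w, double h) implements Polygon { }
}
record Circle(double radius) implements Shape { }

class ShapeDemo {
    // We can match at either level: on Polygon as a whole, or on its leaves.
    static String classify(Shape s) {
        return switch (s) {
            case Circle c  -> "curved";
            case Polygon p -> "straight-edged";
        };
    }
}
```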

In Java, algebraic data types can be modeled precisely as sealed hierarchies whose leaves are records. Java's interpretation of algebraic data types have a number of desirable properties. They are nominal -- the types and components have human-readable names. They are immutable, which makes them simpler and safer and can be freely shared without worry of interference. They are easily testable, because they contain nothing but their data (possibly with behavior derived purely from the data). They can easily be serialized to disk or across the wire. And they are expressive -- they can model a broad range of data domains.

Application: complex return types

One of the simplest but most frequently used applications of algebraic data types is complex return types. Since a method can only return a single value, it is often tempting to overload the representation of the return value in questionable or complex ways, such as using null to mean "not found", encoding multiple values into a string, or using an overly abstract type (arrays, List or Map) to cram all the different kinds of information a method could return into a single carrier object. Algebraic data types make it so easy to do the right thing, that these approaches become less tempting.

In Sealed Classes, we gave an example of how this technique could be used to abstract over both success and failure conditions without using exceptions:

sealed interface AsyncReturn<V> {
    record Success<V>(V result) implements AsyncReturn<V> { }
    record Failure<V>(Throwable cause) implements AsyncReturn<V> { }
    record Timeout<V>() implements AsyncReturn<V> { }
    record Interrupted<V>() implements AsyncReturn<V> { }
}

The advantage of this approach is that the client can handle success and failure uniformly by pattern matching over the result, rather than having to handle success via the return value and the various failure modes via separate catch blocks:

AsyncReturn<V> r = future.get();
switch (r) {
    case Success<V>(var result) -> ...
    case Failure<V>(Throwable cause) -> ...
    case Timeout<V>() -> ...
    case Interrupted<V>() -> ...
}

Another benefit of sealed classes is that if you switch over them without a default, the compiler will remind you if you've forgotten a case. (Checked exceptions do this too, but in a more intrusive manner.)

As another example, imagine a service that looks up entities (users, documents, groups, etc) by name, and which distinguishes between "no match found", "exact match found", and "no exact match, but there were close matches." We can all imagine ways to cram this into a single List or array, and while this may make the search API easy to write, it makes it harder to understand, use, or test. Algebraic data types make both sides of this equation easy. We can craft a concise API that says exactly what we mean:

sealed interface MatchResult<T> { 
    record NoMatch<T>() implements MatchResult<T> { }
    record ExactMatch<T>(T entity) implements MatchResult<T> { }
    record FuzzyMatches<T>(Collection<FuzzyMatch<T>> entities) 
        implements MatchResult<T> { }

    record FuzzyMatch<T>(T entity, int distance) { }
}

MatchResult<User> findUser(String userName) { ... }

If we encountered this return hierarchy while browsing the code or the Javadoc, it is immediately obvious what this method might return, and how to handle its result:

Page userSearch(String user) { 
    return switch (findUser(user)) { 
        case NoMatch() -> noMatchPage(user);
        case ExactMatch(var u) -> userPage(u);
        case FuzzyMatches(var ms) -> disambiguationPage(ms.stream()
                                                          .sorted(Comparator.comparing(FuzzyMatch::distance))
                                                          .limit(MAX_MATCHES)
                                                          .toList());
    };
}

While such a clear encoding of the return value is good for the readability of the API and for its ease of use, such encodings are often easier to write as well, because the code virtually writes itself from the requirements. On the other hand, trying to come up with (and document) "clever" encodings that cram complex results into abstract carriers like arrays or maps takes more work.

Application: Ad-hoc data structures

Algebraic data types are also useful for modeling ad-hoc versions of general purpose data structures. The popular class Optional could be modeled as an algebraic data type:

sealed interface Opt<T> { 
    record Some<T>(T value) implements Opt<T> { }
    record None<T>() implements Opt<T> { }
}

(This is actually how Optional is defined in most functional languages.) Common operations on Opt can be implemented with pattern matching:

static <T, U> Opt<U> map(Opt<T> opt, Function<T, U> mapper) { 
    return switch (opt) { 
        case Some<T>(var v) -> new Some<>(mapper.apply(v));
        case None<T>() -> new None<>();
    };
}
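Other common operations follow the same shape; for example, a hypothetical orElse (with Opt redeclared so the sketch is self-contained):

```java
sealed interface Opt<T> {
    record Some<T>(T value) implements Opt<T> { }
    record None<T>() implements Opt<T> { }
}

class OptDemo {
    // Unwrap the value, falling back to a default when empty.
    static <T> T orElse(Opt<T> opt, T fallback) {
        return switch (opt) {
            case Opt.Some<T>(var v) -> v;
            case Opt.None<T>()      -> fallback;
        };
    }
}
```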

Similarly, a binary tree can be implemented as:

sealed interface Tree<T> { 
    record Nil<T>() implements Tree<T> { }
    record Node<T>(Tree<T> left, T val, Tree<T> right) implements Tree<T> { }
}

and we can implement the usual operations with pattern matching:

static <T> boolean contains(Tree<T> tree, T target) { 
    return switch (tree) { 
        case Nil() -> false;
        case Node(var left, var val, var right) -> 
            target.equals(val) || contains(left, target) || contains(right, target);
    };
}

static <T> void inorder(Tree<T> tree, Consumer<T> c) { 
    switch (tree) { 
        case Nil() -> { }
        case Node(var left, var val, var right) -> {
            inorder(left, c);
            c.accept(val);
            inorder(right, c);
        }
    }
}

It may seem odd to see this behavior written as static methods, when common behaviors like traversal should "obviously" be implemented as abstract methods on the base interface. And certainly, some methods may well make sense to put into the interface. But the combination of records, sealed classes, and pattern matching offers us alternatives that we didn't have before; we could implement them the old fashioned way (with an abstract method in the base class and concrete methods in each subclass); as default methods in the abstract class implemented in one place with pattern matching; as static methods; or (when recursion is not needed), as ad-hoc traversals inline at the point of use.
Because the data carrier is purpose-built for the situation, we get to choose whether we want the behavior to travel with the data or not. This approach is not at odds with object orientation; it is a useful addition to our toolbox that can be used alongside OO, as the situation demands.

Example: JSON

If you look closely enough at the JSON spec, you'll see that a JSON value is also an ADT:

sealed interface JsonValue { 
    record JsonString(String s) implements JsonValue { }
    record JsonNumber(double d) implements JsonValue { }
    record JsonNull() implements JsonValue { }
    record JsonBoolean(boolean b) implements JsonValue { }
    record JsonArray(List<JsonValue> values) implements JsonValue { }
    record JsonObject(Map<String, JsonValue> pairs) implements JsonValue { }
}

When presented as such, the code to extract the relevant bits of information from a blob of JSON is pretty straightforward; if we want to match the JSON blob { "name":"John", "age":30, "city":"New York" } with pattern matching, this is:

if (j instanceof JsonObject(var pairs)
    && pairs.get("name") instanceof JsonString(String name)
    && pairs.get("age") instanceof JsonNumber(double age)
    && pairs.get("city") instanceof JsonString(String city)) { 
    // use name, age, city
}

When we model data as data, both creating aggregates and taking them apart to extract their contents (or repack them into another form) are straightforward, and because pattern matching fails gracefully when something doesn't match, the code to take apart this JSON blob is relatively free of complex control flow for enforcing structural constraints. (While we might be inclined to use a more industrial-strength JSON library than this toy example, we could actually implement the toy with only a few dozen additional lines of parsing code which follows the lexical rules outlined in the JSON spec and turns them into a JsonValue.)
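For example, building that blob programmatically and destructuring it (a trimmed-down JsonValue with only the cases we need here):

```java
import java.util.Map;

sealed interface JsonValue {
    record JsonString(String s) implements JsonValue { }
    record JsonNumber(double d) implements JsonValue { }
    record JsonObject(Map<String, JsonValue> pairs) implements JsonValue { }
    // (the other JSON cases are elided for brevity)
}

class JsonDemo {
    static JsonValue sample() {
        return new JsonValue.JsonObject(Map.of(
            "name", new JsonValue.JsonString("John"),
            "age",  new JsonValue.JsonNumber(30)));
    }

    // Pattern matching fails gracefully: if any sub-test fails, we fall through.
    static String summarize(JsonValue j) {
        if (j instanceof JsonValue.JsonObject(var pairs)
            && pairs.get("name") instanceof JsonValue.JsonString(String name)
            && pairs.get("age") instanceof JsonValue.JsonNumber(double age)) {
            return name + " is " + (int) age;
        }
        return "unrecognized";
    }
}
```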

More complex domains

The domains we've looked at so far have either been "throwaways" (return values used across a call boundary) or modeling general domains like lists and trees. But the same approach is also useful for more complex application-specific domains. If we wanted to model an arithmetic expression, we could do so with:

sealed interface Node { }
sealed interface BinaryNode extends Node { 
    Node left();
    Node right();
}

record AddNode(Node left, Node right) implements BinaryNode { }
record MulNode(Node left, Node right) implements BinaryNode { }
record ExpNode(Node left, int exp) implements Node { }
record NegNode(Node node) implements Node { }
record ConstNode(double val) implements Node { }
record VarNode(String name) implements Node { }

Having the intermediate sealed interface BinaryNode which abstracts over addition and multiplication gives us the choice when matching over a Node; we could handle both addition and multiplication together by matching on BinaryNode, or handle them individually, as the situation requires. The language will still make sure we covered all the cases.
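For example, a node-counting operation can handle both binary operators with a single BinaryNode case (a sketch over a trimmed-down version of the hierarchy):

```java
sealed interface Node { }
sealed interface BinaryNode extends Node {
    Node left();
    Node right();
}
record AddNode(Node left, Node right) implements BinaryNode { }
record MulNode(Node left, Node right) implements BinaryNode { }
record ConstNode(double val) implements Node { }

class NodeDemo {
    // One case handles every binary operator; the leaf gets its own case.
    static int countOps(Node n) {
        return switch (n) {
            case BinaryNode b -> 1 + countOps(b.left()) + countOps(b.right());
            case ConstNode c  -> 0;
        };
    }
}
```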

Writing an evaluator for these expressions is trivial. Since we have variables in our expressions, we'll need a store for those, which we pass into the evaluator:

double eval(Node n, Function<String, Double> vars) { 
    return switch (n) { 
        case AddNode(var left, var right) -> eval(left, vars) + eval(right, vars);
        case MulNode(var left, var right) -> eval(left, vars) * eval(right, vars);
        case ExpNode(var node, int exp) -> Math.pow(eval(node, vars), exp);
        case NegNode(var node) -> -eval(node, vars);
        case ConstNode(double val) -> val;
        case VarNode(String name) -> vars.apply(name);
    };
}

The records which define the terminal nodes have reasonable toString implementations, but the output is probably more verbose than we'd like. We can easily write a formatter to produce output that looks more like a mathematical expression:

String format(Node n) { 
    return switch (n) { 
        case AddNode(var left, var right) -> String.format("(%s + %s)", 
                                                           format(left), format(right));
        case MulNode(var left, var right) -> String.format("(%s * %s)", 
                                                           format(left), format(right));
        case ExpNode(var node, int exp) -> String.format("%s^%d", format(node), exp);
        case NegNode(var node) -> String.format("-%s", format(node));
        case ConstNode(double val) -> Double.toString(val);
        case VarNode(String name) -> name;
    };
}

As before, we could express these as static methods, or implement them in the base class as instance methods but with a single implementation, or implement them as ordinary instance methods -- we're free to choose which feels most readable for the domain.

Having defined our domain abstractly, we can easily add other operations on it as well. We can symbolically differentiate with respect to a single variable easily:

Node diff(Node n, String v) { 
    return switch (n) { 
        case AddNode(var left, var right) 
            -> new AddNode(diff(left, v), diff(right, v)); 
        case MulNode(var left, var right) 
            -> new AddNode(new MulNode(left, diff(right, v)), 
                           new MulNode(diff(left, v), right)); 
        case ExpNode(var node, int exp) 
            -> new MulNode(new ConstNode(exp), 
                           new MulNode(new ExpNode(node, exp - 1), 
                                       diff(node, v)));
        case NegNode(var node) -> new NegNode(diff(node, v));
        case ConstNode(double val) -> new ConstNode(0);
        case VarNode(String name) -> name.equals(v) ? new ConstNode(1) : new ConstNode(0);
    };
}

Before we had records and pattern matching, the standard approach to writing code like this was the visitor pattern. Pattern matching is clearly more concise than visitors, but it is also more flexible and powerful. Visitors require the domain to be built for visitation, and impose strict constraints; pattern matching supports much more ad-hoc polymorphism. Crucially, pattern matching composes better; we can use nested patterns to express complex conditions that can be much messier to express using visitors. For example, the above code will yield unnecessarily messy trees when, say, we have a multiplication node where one subnode is a constant. We can use nested patterns to handle these special cases more eagerly:

Node diff(Node n, String v) { 
    return switch (n) { 
        case AddNode(var left, var right) 
            -> new AddNode(diff(left, v), diff(right, v)); 
        // special cases of k*node, or node*k
        case MulNode(var left, ConstNode k) 
            -> new MulNode(k, diff(left, v));
        case MulNode(ConstNode k, var right) 
            -> new MulNode(k, diff(right, v));
        case MulNode(var left, var right) 
            -> new AddNode(new MulNode(left, diff(right, v)), 
                           new MulNode(diff(left, v), right)); 
        case ExpNode(var node, int exp) 
            -> new MulNode(new ConstNode(exp), 
                           new MulNode(new ExpNode(node, exp - 1), 
                                       diff(node, v)));
        case NegNode(var node) -> new NegNode(diff(node, v));
        case ConstNode(double val) -> new ConstNode(0);
        case VarNode(String name) -> name.equals(v) ? new ConstNode(1) : new ConstNode(0);
    };
}

Doing this with visitors -- especially at multiple levels of nesting -- can quickly become quite messy and error-prone.

It's not either/or

Many of the ideas outlined here may look, at first, to be somewhat "un-Java-like", because most of us have been taught to start by modeling entities and processes as objects. But in reality, our programs often work with relatively simple data, which often comes from the "outside world" where we can't count on it fitting cleanly into the Java type system. (In our JSON example, we modeled numbers as double, but in fact the JSON specification is silent on the range of numeric values; code at the boundary of a system will have to decide whether to truncate or reject values that don't fit the local representation.)

When we're modeling complex entities, or writing rich libraries such as java.util.stream, OO techniques have a lot to offer us. But when we're building simple services that process plain, ad-hoc data, the techniques of data-oriented programming may offer us a straighter path. Similarly, when exchanging complex results across an API boundary (such as our match result example), it is often simpler and clearer to define an ad-hoc data schema using ADTs, than to complect results and behavior in a stateful object (as the Java Matcher API does.)

The techniques of OOP and data-oriented programming are not at odds; they are different tools for different granularities and situations. We can freely mix and match them as we see fit.

Follow the data

Whether modeling a simple return value, or a more complex domain such as JSON or our expression trees, there are some simple principles that usually lead us to simple, reliable data-oriented code.

  • Model the data, the whole data, and nothing but the data. Records should model data. Make each record model one thing, make it clear what each record models, and choose clear names for its components. Where there are choices to be modeled, such as "a tax return is filed either by the taxpayer, or by a legal representative", model these as sealed classes, and model each alternative with a record. Behavior in record classes should be limited to implementing derived quantities from the data itself, such as formatting.

  • Data is immutable. An object that has a mutable int field does not model an integer; it models a time-varying relationship between a specific object identity and an integer. If we want to model data, we should not have to worry about our data changing out from under us. Records give us some help here, as they are shallowly immutable, but it still requires some discipline to avoid letting mutability inject itself into our data models.

  • Validate at the boundary. Before injecting data into our system, we should ensure that it is valid. This might be done in the record constructor (if the validation applies universally to all instances), or by the code at the boundary that has received the data from another source.

  • Make illegal states unrepresentable. Records and sealed types make it easy to model our domains in such a way that erroneous states simply cannot be represented. This is much better than having to check for validity all the time! Just as immutability eliminates many common sources of errors in programs, so does avoiding modeling techniques that allow us to model invalid data.
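A minimal sketch combining several of these principles (the TaxReturn and Filer types here are hypothetical): validation in the compact constructor, a defensive immutable copy, and a sealed type for the choice of filer:

```java
import java.util.List;

// Hypothetical: model the choice of filer with a sealed type.
sealed interface Filer {
    record Taxpayer(String taxId) implements Filer { }
    record LegalRepresentative(String taxId, String licenseNo) implements Filer { }
}

record TaxReturn(Filer filer, List<String> attachments) {
    TaxReturn {
        // Validate universally in the compact constructor...
        if (filer == null) throw new IllegalArgumentException("filer is required");
        // ...and defend against mutability leaking in through a component.
        attachments = List.copyOf(attachments);
    }
}
```

A TaxReturn with no filer simply cannot be constructed, and the attachments list cannot change out from under us.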

A hidden benefit of this approach is testability. Not only is it easy to test code when its inputs and outputs are simple, well-defined data, but it opens the door to easier generative testing, which is often far more effective at finding bugs than hand-crafting individual test cases.
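As a sketch of what generative testing over an ADT can look like using nothing but the JDK (the Tree type from earlier, redeclared here, plus a hypothetical random generator and property check):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;

sealed interface Tree<T> {
    record Nil<T>() implements Tree<T> { }
    record Node<T>(Tree<T> left, T val, Tree<T> right) implements Tree<T> { }
}

class TreeGen {
    // Generate a random tree of bounded depth.
    static Tree<Integer> random(Random r, int depth) {
        if (depth == 0 || r.nextInt(4) == 0) return new Tree.Nil<>();
        return new Tree.Node<>(random(r, depth - 1), r.nextInt(100), random(r, depth - 1));
    }

    static <T> boolean contains(Tree<T> tree, T target) {
        return switch (tree) {
            case Tree.Nil<T>() -> false;
            case Tree.Node<T>(var left, var val, var right) ->
                target.equals(val) || contains(left, target) || contains(right, target);
        };
    }

    static <T> void inorder(Tree<T> tree, Consumer<T> c) {
        switch (tree) {
            case Tree.Nil<T>() -> { }
            case Tree.Node<T>(var left, var val, var right) -> {
                inorder(left, c);
                c.accept(val);
                inorder(right, c);
            }
        }
    }

    // Property: an element is contained iff it appears in the inorder traversal.
    static boolean property(Tree<Integer> t, int x) {
        List<Integer> elems = new ArrayList<>();
        inorder(t, elems::add);
        return contains(t, x) == elems.contains(x);
    }
}
```

Because the generated inputs are plain immutable data, a loop that creates a few hundred random trees and asserts the property exercises far more shapes than hand-written cases would.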

The combination of records, sealed types, and pattern matching makes it easy to follow these principles, yielding more concise, readable, and reliable programs. While programming with data as data may be a little unfamiliar given Java's OO underpinnings, these techniques are well worth adding to our toolbox.


Community comments

  • What about computation?

    by John Smith,

    These are perfect, wonderful features for enterprise programming. What I see as a big missing feature is a decimal type. We have two options: floating-point arithmetic (double/float), with its .000000123 fraction waste, and the BigDecimal class, with its clumsy API. Project Valhalla seems to be bringing operator overloading and might be thought to solve this problem that way, but IMHO that would cause new problems and worsen code readability. A "primitive decimal type" seems a better option, but I'm leaving that issue to the lang experts and hoping for a solid feature for this common problem.

    Here's a nice article summarizing the issue:
    blogs.oracle.com/javamagazine/post/is-it-time-f...

  • Re: What about computation?

    by Brian Goetz,

    Decimal types (and numerics in general other than the built-in primitives) are indeed an ideal application for the value classes that Project Valhalla will bring. (But, this seems largely orthogonal to the topic of the article.)

  • Java and Prolog

    by Andrea Del Bene,

    I remember nearly 20 years ago when I was attending a university course on computability and programming languages. Our professor showed us how easily we could write an expression evaluator with Prolog's pattern matching support (compared to imperative Java code).
    Now it seems we don't need to switch to Prolog anymore :-).

    Great job!

  • Re: What about computation?

    by John Smith,

    Your detailed answer about operator overloading is here, with conflicting pros and cons. Sometimes solving a problem may cause more problems; maybe it is better to live with it as before:

    www.youtube.com/watch?v=jDU-JALUDd0&t=23m45s

  • How about data-oriented machines?

    by Kim Briggs,

    This highlights an intrinsic problem inherent in algorithmic languages like Java. They are all designed to direct an instruction pointer to manipulate data in RAM. They cannot intrinsically describe contexts that are external and asynchronous to the CPU clock. For that you need to build supporting frameworks like Dewdrop, OpenWhisk, Occurent, etc.

    Me, I'd like to see intrinsic support for asynchronous processing built into Java etc, in the form of stackable runtime finite state transducers compiled from nested multidimensional regular expressions formed from terms like (a, b[c] d) where:

    a denotes an enumerator representing an asynchronous something happened,

    b denotes a unary Java method that can accept a parameter c, and

    d denotes a nullary Java method.

    Meaning that on receiving a, invoke b[c] d and update the FST state.

    These expressions as a whole present a clear, concise, structured and comprehensive view of asynchronous processing systems. They are as potent and useful in event-driven workflows as they are in high-volume pattern matching and transduction contexts (beats the pants off Java regex, see charts in github.com/jrte/ribose).

    The nested transduction model can be applied in myriad asynchronous sequential processes because FST designers work in the algebraic (semi-ring) regular expression space and are free to select the methods triggered by each input symbol, be it a byte from a UTF-8 byte stream or an enumerator signalling an asynchronous event. Runtime FSTs are represented as integer vectors and matrices expressed in terms of enumerators of the three dimensions of the RE space -- input, effector, parameter. This would fit nicely into the JVM runtime, I think.

    See github.com/ntozubod/ginr for an industrial strength multidimensional RE compiler.
