Newly Announced Ecstasy Programming Language Targets Cloud-Native Computing

Key Takeaways

  • Ecstasy is a general-purpose, type-safe, modular programming language built for the cloud
  • The team building Ecstasy plans to use it as the basis for a highly scalable Platform as a Service (PaaS)
  • Ecstasy is still in development and is not yet ready for production use
  • The Ecstasy team is looking for contributors who want to be involved in defining the future of our industry

Ecstasy was co-created by Tangosol co-founders Cameron Purdy and Gene Gleyzer, who recently showcased the language at CloudNative London 2019. Before founding xqiz.it, the sponsor of the Ecstasy project, Purdy was the Oracle Senior Vice President for enterprise Java, WebLogic, Coherence, Traffic Director, HTTP, JDBC, and Exalogic products. InfoQ caught up with him to ask some questions about the language and the problems it’s designed to solve.

InfoQ: What is the Ecstasy language (XTC)?

Cameron Purdy: Ecstasy is a general-purpose, type-safe, modular programming language built for the cloud. We took the last 30 years’ worth of learning, refined it into an easy-to-comprehend language, and left the cruft behind.

In some ways, it’s the language that I wish I had been able to use to build Tangosol Coherence. It’s the language that I wish I had been able to use to build applications for the web and for back end services. It’s the language that I wish we would have had when we set out to build parts of the Oracle Cloud infrastructure.

It’s a language that focuses on security, composition, and readability, but it’s designed with explicit intent for repositories, continuous integration (CI), devops, continuous deployment (CD), cloud and client portability, edge/CDN/5G integration, and geographically distributed systems.

It’s also not just a new language. The Ecstasy language compiles to a new portable binary format, with a new instruction set, based on a new managed runtime that was designed to support just-in-time (JIT), ahead-of-time (AOT), and adaptive compilation.

InfoQ: The stuff on Refs (if I understand it correctly) would seem to imply that this is definitely NOT a systems programming language?

Purdy: It’s not a systems programming language. What I mean by that is that we did not design this language as a language for writing an operating system, or an operating system driver, or something to manage memory, or something with which to write a word processor. Of course, as a general purpose language, you could conceivably do any of those things, but languages are built for a purpose, and this one is built for the purpose of helping developers compose and evolve applications in the cloud.

Those applications will have pieces that execute within the datacenters of various cloud providers, and within the cloud providers’ CDNs, and within the 5G infrastructure of the telcos, and out to the edges of the various ISPs, and down into the client devices, and even into the browser running on those clients.

Our goal was that someone who knew Java or C# (both of which I have programmed extensively in myself) or Swift would be able to read Ecstasy code with no learning required, and would be able to write Ecstasy code within hours or days of picking it up. While many of the language features will be very well known to C and C++ developers as well, that audience is not our primary target, even though the language is technically part of the C family of languages. Anyone using Kotlin, Python, Erlang, Elixir, Smalltalk, or Ruby should also be able to quickly grok Ecstasy; many of the language ideas and capabilities in Ecstasy have already shown up in, or are in the process of being adopted by these languages.

Like Java and C#, Ecstasy was designed for a managed runtime environment, but thanks to hindsight, our design was able to take much more advantage of the concept of a managed runtime environment. One obvious example is with respect to threading: Ecstasy does not surface the concept of a thread, because doing so would prevent the runtime from managing concurrency dynamically. (There are many other good reasons for avoiding the surfacing of threads in a language, including avoiding all of the pitfalls that come from developers trying to write correct concurrent code.) In Ecstasy, the choice of how to allocate threads across potentially concurrent and potentially asynchronous work is made by the runtime; in a way, we applied the concepts from the Java "Hotspot" adaptive compiler to the questions of concurrency and threading, allowing the runtime to use empirical evidence collected while running the code to subsequently optimize the execution of that same code. Furthermore, Ecstasy was designed for Ahead-Of-Time (AOT) compilation, meaning that the heavy lifting of compilation and optimization can be done before the code is executed, and with adaptive compilation, this also means that the information collected over time and the optimizations refined over time can be retained for future use -- even when the application is stopped.

We also recognized that hardware has changed dramatically since the 1990s, when the JVM and the CLR were invented; back then, high-end $100k servers had two-to-four concurrent hardware threads and maybe a few gigabytes of RAM -- specs that are easily eclipsed by a modern mobile phone. When we sat down to design Ecstasy, we designed it to run well on servers with tens of terabytes of RAM in a single process, and with thousands of concurrent hardware threads in a single process. Very quickly, we eliminated language concepts like "thread" and "synchronized", and added concepts like immutability; we also quite purposefully eliminated underlying limitations like "the heap". By adopting an intrinsic actor model (a la Smalltalk’s messages, and Erlang’s processes), we allowed an architect or a developer to easily subdivide their application into potentially concurrent units, which Ecstasy calls services. The rules for memory, threading, and security are all described in terms of services, and are quite obvious in retrospect.
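
As a rough illustration, a service is declared much like a class, and the runtime decides how the work flowing through it is scheduled. The following is a minimal sketch; the module, the Counter service, and its contents are invented for this example rather than taken from the Ecstasy library:

    module CounterExample {
        // A hypothetical service: the runtime, not the developer, decides
        // how work inside it is scheduled; callers in other services see it
        // as a potentially asynchronous unit, with no explicit thread or
        // lock anywhere in the code.
        service Counter {
            private Int count = 0;

            Int next() {
                return ++count;
            }
        }
    }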

And the beauty of the language is also made obvious through its type system. Ecstasy is statically typed, with incredible type inference, generic reification, and so on, but it is far more than that: The design of the type system and the design of the runtime are coherent. In other words, we modeled the two of those things in concert, so that the type system makes sense within the definition of the runtime, and the runtime makes sense from looking at the type system. Even properties, variables, and object references are objects, so runtime reflection becomes natural, and the runtime and the type system act as symbiotes. Some of the decisions in this regard were incredibly important, including designing for provable closure over the type system, supporting hierarchical (nestable) runtime containers, and designing for secure, dynamic code loading.

InfoQ: What's the plan for something like crates/npm/dep (lots of lessons there on how NOT to do things)?

Purdy: This was an area of intense focus as we designed the language and the runtime. Part of our thinking was influenced by our experience building libraries and frameworks and applications that were used in a variety of different environments, and with a variety of other libraries and frameworks and servers.

For example, at one point in the distant past, the company where I worked was using either the SAX or DOM library for XML parsing. One day, a completely incompatible API change was introduced, which forced us to build against either the old version or the new version, and whichever one we built against precluded linking to the other. Even worse, this library was included in many different app servers, and thus any older version of those app servers would have the older version of the library, and any newer version of those app servers would have the newer version of the library. At that time, we already had users on many different versions of many different app servers, and suddenly it was impossible for us to ship with a dependency on that Apache XML parser! So -- and I’m not joking here -- we had to write our own XML parser (to avoid dependencies), and then we had to support it for years and years!

Another example is that we wanted our product to work well with Spring, so we did some joint development with the Spring developers, but in order to prevent a hard dependency on Spring (which many of our customers did not use at the time), and in order for Spring to avoid any hard dependencies on us, we had to do all of the integration via reflection! So something that should have been as simple as a line of code would instead take 20 lines of code, and that complexity had to get sprinkled all over. Similarly, we added plug-in support for Hibernate and for SolarMetric Kodo, but now we had combinatorial complexity across the various combinations of these "optional" libraries. Modularity in Java was possible with reflection, but it was a huge pain. At one point, we even tried OSGi to see if it would simplify matters, but the result turned out to be even more complicated.

Many of these same problems kept appearing as we worked with companies building applications, and so we took what we learned from all of these experiences, and set out to design Ecstasy from the ground up to solve these problems. First, we designed the compilation unit to be a module -- or optionally a set of modules, which is sometimes necessary when circular dependencies exist among modules. Next, we baked in support for repositories by allowing modules (and only modules) to declare their own universal identity in the form of a Uniform Resource Identifier (URI); when a dependency exists on a module, that module dependency is expressed as a URI, and we say that the depended-upon module is imported.

It’s slightly off topic, but it is very cool how simple importing a module is and how elegantly it works: The imported module is represented as a package (aka namespace) inside of the module that imports it; it’s as if all of the contents of the imported module are copied into that package and therefore are all accessible by the local names as they would appear inside that package. For example, every module automatically imports the (URI) "Ecstasy.xtclang.org" module as the "Ecstasy" package, so the "collections.HashMap" class from the Ecstasy module is present as the "Ecstasy.collections.HashMap" class in every module.
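
A minimal sketch of what such an import looks like in a module declaration; the module name MyApp and the imported identity calc.example.org are invented for illustration, following the package-as-namespace pattern described above:

    module MyApp {
        // The depended-upon module is named by its URI-style identity and
        // surfaces locally as the package "calc"; its classes can then be
        // referenced as calc.SomeClass from anywhere inside this module.
        // (Module and class names here are purely illustrative.)
        package calc import calc.example.org;
    }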

When a module is built, it gets stamped with a version; typically, that version will be either a development or CI version. The version stamp also contains a version number, supporting the Semantic Versioning 2.0.0 specification, and the stamp can be updated, so a CI build that does not regress any tests can be stamped as a QC or pre-release build. When the build is ready for roll-out, the pre-release marker can be removed. This is all designed for automation, and designed to be flexible enough to match an organization’s existing processes.

The module design is unique in another way: A single module file can contain many different versions of that module. When two different versions of a module are combined, the file only grows by the size of the differences between the versions. This allows a single module file to contain every single one of its supported versions, plus pre-releases of future versions, plus optional patches to older versions, and so on. The thinking behind this capability is to make it easy to roll forward and back, to A/B test, and to safely deliver optional patches, all within a single deliverable.

And module dependencies are not necessarily hard dependencies. For example, an application may specify that module "A" is required, module "B" is desired (i.e. link to the module if it can be found), and module "C" is supported (i.e. only link to the module if some other module in the dependency graph caused the module to be linked). Additionally, a module can be embedded, which means that it gets physically included into the module that embeds it; this capability is intended primarily to support shipping a single module that happens to be constructed (for organizational or other reasons) from more than one module.
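
A hedged sketch of how these different kinds of dependencies might be declared; the modifier spellings shown here (import:required, import:desired, import:optional, import:embedded), along with the module names, are assumptions made for illustration rather than confirmed Ecstasy syntax:

    module DependencyExample {
        // NOTE: the import modifiers below are illustrative assumptions.

        // Hard dependency: linking fails if module "A" cannot be found.
        package a import:required a.example.org;

        // Soft dependency: link module "B" if it can be found.
        package b import:desired b.example.org;

        // Conditional dependency: link module "C" only if something else
        // in the dependency graph has already caused it to be linked.
        package c import:optional c.example.org;

        // Embedded dependency: module "D" is physically included in the
        // module that embeds it.
        package d import:embedded d.example.org;
    }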

But the most powerful aspect of this is how these capabilities are combined. Specifically, a module may or may not have another module present at runtime, and if it is present, that module may or may not be of a particular version, or contain a particular class or feature. This is the reality of DLL hell, as we used to call it. Ecstasy provides a way for the presence of classes and code to be conditional on the version or the presence of other classes, and the compiler will compile those conditions into the resulting modules much the same way that it can put multiple versions into the same module; in other words, the module can both support the absence and the presence of another module, or of a particular version, and that resolution is made as part of the dynamic linking process. This allows a module to avoid hard dependencies on a module, or on specific versions of a module, while still fully integrating with and leveraging those modules or versions if they are present.

And because all of this information is encoded into the module, tests for all of these combinations can be completely automated. Speaking of tests: Unit tests, functional tests, and integration tests can also be built into the module, and -- just like a version that is not needed -- the tests are not present when the modules are used in production.

It should be obvious by now that this is something that we thought about a great deal, and put a great deal of effort into designing. But why? Among other things, it’s because of the CVEs and the zero-days and the breaking changes introduced into production. Imagine having a matrix of "what versions have been tested with what versions" -- before you need it. Imagine having your regression and acceptance tests for any given version already built into the module with that version. Imagine being able to roll out a new patch in an A/B deployment, with the patch segregated in a manner that its results can be compared -- live! -- with the version that it’s replacing, to determine in advance what actual changes the end users will see.

The design that supports these capabilities is in place, but the tooling and automation (which relies on those underlying capabilities) has not yet been built, and is still a ways off.

InfoQ: Does the DevOps intent extend into supporting progressive delivery?

Purdy: Exactly, but this isn’t a simple thing to accomplish. One thing that we accepted up front was that applications would almost always have version skew, and progressive delivery is just one example of version skew. Version skew occurs any time that you have two different versions of an application running; it can be skew between older versions of an application client and newer versions of the server back-end, or skew between various server instances on the back end. Regardless, it forces a requirement for protocol interoperability and state compatibility, and that requirement in and of itself is significant. With respect to state, older versions of code have to be able to non-destructively work with newer versions of data, and newer versions of code have to be able to accept (and potentially automatically upgrade) older versions of data.

We accepted, up front, that version skew wasn’t an anomaly, but rather is a pervasive situation that will almost always be present within any substantial application. We had some experience with this topic from our work designing the Portable Object Format (POF) specification years ago, which supported both forward and backward schema compatibility. It turns out that accepting and enabling safe version skew is the basis for many powerful capabilities, including progressive delivery, incremental roll-outs, traditional A/B (and the automation thereof), and so on. In a scale-out environment, to avoid service interruption, servers will be started with new versions of the application while servers with old versions of the application continue to run for some period of time. (It can be even more complicated when something goes wrong in the roll-out of the new version, and the application has to be rolled back to the previous version, so the new version must not change the persistent state of the application in a manner that the old application cannot continue to operate on.) So all of the design work that we have done with object serialization and database interface design has had this requirement in it from the start.

Combined with the powerful module versioning capabilities and resource injection (which is the only way that an application can communicate with the outside world), this allows an autonomous infrastructure to run a new version (with all of its test-mode functionality enabled) alongside an old version, while keeping the new version "in a box" such that its state changes are not made visible and its responses to clients are not actually delivered; this allows the new version to be tried with production workloads and data without risking damage to the production database and without impacting end users. Similarly, an autonomous infrastructure can roll out an upgrade in the same manner, by incrementally starting and warming up servers running the new version, then incrementally bleeding off traffic from the old version to the new; in case of failure, the process can be reversed. Ideally, newer versions of the application will be able to run in production for a period of time before actual production workloads are transferred from the old to the new version -- long enough to prove an absence of obvious regression. Similarly, by transitioning traffic starting with a small subset, while continuing to route most traffic to the old version, it is often possible to limit end user impact in scenarios where problems only become identified once end users are interacting with the application.

The design of the database APIs is also intended to eventually allow stateful systems to operate across hierarchically organized infrastructures. Our goal is to allow portions of the application to operate within the content delivery network (CDN) and edge tiers, and all the way out to the 5G tower. To accomplish this, Ecstasy was designed as a securely embeddable language, and the database APIs are being designed using an actor-based model with an awareness of units-of-work and eventual consistency. Ultimately, the goal is to be able to support those same APIs all the way down to the client level, allowing applications to function even in intermittently-connected and offline modes. These capabilities are still somewhere over the horizon, but we intend to deliver a proof-of-concept showing how the capability will be supported by the time we finalize the initial version of those APIs.

InfoQ: WASM has been popping up in a few places; what made you choose it for XTC?

Purdy: Just to be clear up front, we don’t support WASM yet, but that’s one of the compilation targets that we designed for, in addition to native x86/x64 and ARM. One of the goals with Ecstasy was to support shared components all the way to the glass, and all the way back to the back-endiest of the back-end systems. That means that the programming model has to be portable all the way into the browser, because the browser makes up such a significant portion of the client population. (That’s in addition to being able to run natively on iOS, Android, macOS, and Windows on the client.)

If you’re targeting the browser, then you either have to convert to JavaScript, or compile to WASM. Since I’d rather gnaw my own arm off than generate reams of JavaScript to debug, this was an easy choice. The other thing to keep in mind is that LLVM already supports WASM: WASM is a first-class citizen (code production target) in the LLVM project, and our planned compiler back-end has always been LLVM.

InfoQ: Who’s going to use this, and how?

Purdy: We’re already using it in our own product development work, and we’re already finding it very hard to switch back to one of the older languages after working in Ecstasy. This language is addictive. On the other hand, the language is still in the active development phase, so it’s not yet ready for prime time, i.e. it’s not ready for building applications today. Unless you really like language, compiler, and tool-chain development, it’s not yet time for you to adopt Ecstasy.

As it becomes available for production use over the next 24 months, we do know who will be using it: developers building applications for the cloud, with applications that have to run on the various clients and with stateful, serverless back-end deployments; developers who appreciate the value of automation, of serviceability, of manageability, and of a language designed to reduce the dramatic cost of the software lifecycle.

Most of the companies that I have worked with over the past 25 years are spending 95% of their IT budget (and usually even higher!) just to keep old stuff running. It’s somewhere between hard and impossible to fund new development, or even to significantly evolve existing applications. This is our first chance to turn that cost model upside-down in a long, long time. Ecstasy is a generational change in thinking.

The language is free and open source. All of it. The runtime. The toolchain. The class library. Everything. Wide open. We license using the standard Apache open source license. You can fork Ecstasy. You can embed Ecstasy. The Ecstasy trademark is owned by the Ecstasy project (the maintainer organization for the language), but other than the brand, companies and developers can do whatever they want with the language.

InfoQ: How does this not get caught in a pincer between Serverless and Kubernetes?

Purdy: This is the first language designed for serverless. That sounds like marketing nonsense, but it’s not. Every language before this (with a few possible exceptions, like JavaScript) was built as a layer cake, on top of a native class library and implementation, on top of an operating system (with whatever surface area could be exposed within the OS), on top of a physical machine. The problem is that each layer didn’t hide the layer beneath it -- and this fault was purposeful!

Look at how Amazon does serverless today: They actually give you your own server. (Yeah, it’s all done behind the scenes, but that’s how it works.) Why? Because your serverless workloads can do whatever they will to the environment and the OS and the "machine" (a virtual machine, obviously), so they can’t take the risk associated with having more than one account share that machine.

Ecstasy doesn’t expose the machine. Or the OS. Or any of the libraries that can be found in the OS. Rather, it uses inversion of control (IOC): Ecstasy injects resources into an application, instead of allowing that application to go rummaging through the computer to find the resources that it’s after. Need a database? Inject it. Need a file system? Inject it. Block storage? Inject it. HTTP access? Inject it. A web service? Inject it. This is the first time that a language was explicitly designed to hide the entire universe within which the application code is running, and the first time that a language allowed all of these various resources to be fully managed, at the whim of the (Ecstasy) container within which the application is hosted. It’s a language designed to be secure, not secured as an afterthought by layering on some security thingie.
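
A minimal sketch of what injection looks like in code, following the @Inject pattern shown in the introductory material for the Ecstasy project; the module name and greeting are invented for illustration:

    module HelloCloud {
        void run() {
            // The hosting container decides what "Console" actually is and
            // injects it; the code never rummages through an OS, a file
            // system, or any other ambient machine resource to find one.
            @Inject Console console;
            console.print("Hello from an injected world!");
        }
    }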

So Ecstasy is serverless, by design. As for Kubernetes, it is an amazing engineering solution to a horribly complex problem that shouldn’t have existed in the first place. I expect that Kubernetes will do very well, because so many projects are tied to the past, and Kubernetes is the path of least resistance to keep those complex turd-balls running.

Look, it’s time for a change. It’s simply too expensive to build an app today -- often requiring developers to pull together dozens or even hundreds of libraries and components. Just look at Node or its equivalent in any language; these things are near-impossible to secure and maintain.

I hope that Ecstasy is that change.

About the Interviewee

Cameron Purdy is an 11x developer, co-creator of the Ecstasy programming language, and co-author of Oracle Coherence. Cameron is co-founder and CEO of xqiz.it. Previously, he was co-founder and CEO of Tangosol, which was acquired by Oracle, where he then served as the Senior Vice President for enterprise Java, WebLogic, Coherence, Traffic Director, and Exalogic products. Cameron is a contributor to the Java Language and Virtual Machine specifications, author of the Portable Object Format (POF) specification, and author of a plethora of patents on distributed computing and data management.


Community comments

  • Still lots of challenging projects to go!

    by Cameron Purdy,

    We are looking for feedback, and we're looking for contributors who are interested in learning more about how new languages and runtimes get built from scratch!

    Peace,

    Cameron.

  • After reading article...

    by Bas Groot,

    I'm always triggered by news about new architectures and languages. A fully injected runtime with multiple semver modules, upgraded over the web, no direct OS access ever - I built something like that and that concept really is the way to go.

    I'm most curious how version-independent data serialization is going to work out. On a key/value level it's doable of course, provided that some part of this remembers fields that were unused but must not be forgotten when writing, as they could originate from a newer software version. I assume you've got this covered.

    Then there is cross-version support of changes in object structure (field->list, list->tree, or vice versa), field format (boolean->enum, int->string, list->indexmap, etc.), and field meaning. These are real-world problems with serialized data, and I only managed to find partial solutions with heavy dependence on scripting; in real-world production applications, the scripting bit makes transitions nontrivial and even scary.

    I'm sure you guys thought about it, so I'm curious what you have in mind for such transitions.

  • Re: After reading article...

    by Cameron Purdy,

    > I'm most curious how version-independent data serialization is going to work out. On a key/value level it's doable of course, provided that some part of this remembers fields that were unused but must not be forgotten when writing, as they could originate from a newer software version.

    Yes, indeed. Basically, the schemas (metadata, among other things) for all of the supported versions must be retained, and a bidirectional conversion between each newer version and the version that it updates must also be available.

    > Then there is cross-version support of changes in object structure (field->list, list->tree, or vice versa), field format (boolean->enum, int->string, list->indexmap, etc.), and field meaning. These are real-world problems with serialized data, and I only managed to find partial solutions with heavy dependence on scripting; in real-world production applications, the scripting bit makes transitions nontrivial and even scary.

    Indeed. It's one of those things that, as an afterthought, becomes a nightmare. We hope that, by designing it in as part of the normal flow of development, it becomes more manageable and predictable.

    The thing is, very few applications exist that are allowed to throw away all previously accumulated state on every update. Almost every single application is expected to bring along its data, intact (i.e. nothing lost), from version to version. Instead of acting surprised at the end of a release cycle when this requirement suddenly appears, we are baking it into the design. Not simply as a common case, but as the normal case.

    Versioning is a problem that needs to be solved up-front.

    Peace,

    Cameron.
