A vision for MPS and GraalVM-based Interpreters

I summarized my vision of how to combine MPS and GraalVM / Truffle to provide high-fidelity interpreters.

What do you think of these ideas? Does it make sense? Is it feasible? Useful?
I’d be very interested in your views.

https://www.nikostotz.de/blog/mps-quest-of-the-holy-graalvm-of-interpreters/

1 Like

While you describe what you want to do and, at a high level, how, I'm missing the goal of it.

What do you expect to be the benefits over the current interpreter framework?

My very first reaction was: great!

My second reaction, after reading your post, was: this sounds like a lot of work, with a lot of things that could go wrong (for example, your point that a user could delete a node while it is being executed).

We have been looking into Truffle for an interpreter built outside MPS, and it seems to me that the learning curve is steep. So I am naturally careful with it. Maybe I need to spend more time on it, or find better resources, but at the moment it looks a bit like black magic to me. Having spent more time with it, do you think this is a feeling that goes away at some point?

That said, I would second the question by @dumdidum : what are the main benefits you are looking for?
Performance? Interoperability with other languages supported by Truffle? Is this something you started looking into because of your interests or because of some recurring problem you have experienced?

I think it is something with a lot of potential; I would just like to understand better what the main feature is that could convince early adopters to invest in this, as it is no small engineering feat to get this right (and therefore I compliment you a lot on your achievements with this!)

I'm missing the goal of it.
What do you expect to be the benefits over the current interpreter framework?

Very good point.

In short:

  1. Very good performance
  2. Tool support good enough that it's easy to use
  3. Stand-alone capabilities so we don’t need a separate generator

In more detail:
ad 1: I think once we get to executable “megamodels”, caching won’t help for interpreter performance.
We will need smart recalculation (aka incremental updates), but also actually fast execution.

The threshold for “megamodel” is pretty low, IMHO: as soon as we have any kind of live execution with base data that isn't very static, we need to re-evaluate the better part of the system quite regularly.

A real-world example: I'm currently working with feature models à la IETS3. The solver can easily handle the model (< 1 s), but it still runs every time I change anything in the model. This leads to a quite poor editing experience.
Of course this can be improved by re-evaluating less frequently, but even implementing that well is not an easy task. And even if we managed it, the actual evaluation still needs to be fast enough that the user can relate change and effect intuitively. And even if we dropped this requirement and evaluated asynchronously, getting that right is hard.
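To make the “smart recalculation” idea concrete, here is a minimal, hypothetical Java sketch (all class and key names are made up, not from any existing framework): a dependency-tracked cache where editing one node invalidates only the results that transitively depend on it, instead of throwing everything away and re-running the whole solver.

```java
import java.util.*;

// Hypothetical sketch of incremental invalidation: cached results record
// which inputs they depend on; changing an input invalidates only its
// transitive dependents, not the whole cache.
class IncrementalEvaluator {
    private final Map<String, Integer> cache = new HashMap<>();
    private final Map<String, Set<String>> dependents = new HashMap<>();

    // Declare that 'result' must be recomputed whenever 'input' changes.
    void dependsOn(String result, String input) {
        dependents.computeIfAbsent(input, k -> new HashSet<>()).add(result);
    }

    void put(String key, int value) { cache.put(key, value); }
    Integer get(String key) { return cache.get(key); }  // null = needs recompute

    // Invalidate a changed node and, transitively, everything depending on it.
    void invalidate(String key) {
        if (cache.remove(key) != null) {
            for (String d : dependents.getOrDefault(key, Set.of())) {
                invalidate(d);
            }
        }
    }
}
```

Unrelated cached results survive an edit untouched, which is exactly the behaviour that would keep the feature-model solver from re-running on every keystroke.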

ad 2: We will never manage to completely hide the complexity of interpretation, but the current framework gets us quite far. Truffle also helps a lot to make advanced optimizations accessible to those of us without multiple PhDs; however, I agree with @ftomassetti that it still needs considerable investment. So we should bridge the gap between these two.

However, even with the current framework, writing interpreters is a lot harder than “regular” coding. Part of this is inherent complexity, but a big part is inferior tooling. Thus, I think we need good tooling to move interpreter implementation from feasible to easily doable.

ad 3: If we had good enough performance (1) and tool support similar to regular development (2), then why would we need to write a generator to get our logic to run outside the modeling system? Truffle already runs on the JVM and as native images, and I have some hopes for a web-based runtime (see Hi I'm Mani - excited to be part of this forum - #3 by Niko). This would save us a huge amount of work.

We have been looking into Truffle […] it looks a bit like black magic to me.
Having spent more time with it, do you think it is a feeling that goes away at some point?

At some point I understood quite a few of the concepts, which helped me a lot.
I found the lack of good error messages very difficult (this seems to be a limitation of Java annotation processors).
In general: yes, this is advanced technology that needs serious time to understand.
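Part of what makes Truffle feel like black magic is that its annotation processor generates self-rewriting node code you never see. The core idea can be sketched in plain Java without any Truffle dependency (this is my own illustrative approximation, not Truffle's actual generated code): a node starts generic and, on first execution, replaces its own implementation with one specialized to the operand types it actually observed.

```java
import java.util.function.BiFunction;

// Hypothetical plain-Java sketch of the self-specializing-node idea behind
// Truffle's @Specialization: the first execution picks a fast path based on
// the observed operand types; later executions skip the type dispatch.
class AddNode {
    private BiFunction<Object, Object, Object> impl = this::specializeAndExecute;

    Object execute(Object left, Object right) {
        return impl.apply(left, right);
    }

    private Object specializeAndExecute(Object left, Object right) {
        if (left instanceof Integer && right instanceof Integer) {
            // Rewrite ourselves to the integer fast path.
            impl = (l, r) -> (Integer) l + (Integer) r;
        } else {
            // Generic fallback: string concatenation.
            impl = (l, r) -> l.toString() + r.toString();
        }
        return impl.apply(left, right);
    }
}
```

In real Truffle, the JIT compiler can then inline and compile the specialized path; the annotations hide the rewriting machinery, which is convenient but also why the error messages are so hard to relate back to your code.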

Interoperability with other languages supported by Truffle?

At first glance, this feature set looked like the natural run-time companion to MPS’ language extensibility.
As described in the article, I’m not 100 % sure if this fulfills its promise.

I don’t see the obvious advantage of interacting with Ruby, Python, etc. on their own.
This changes completely if our target system uses one of these languages – then we’d get the integration for free. JavaScript in the browser would be the obvious example.

Is this something you started looking into because of your interests

I got quite interested in this topic when I started to build the interpreter framework. Truffle fascinates and challenges me, which keeps things interesting (-:

or because of some recurring problem you have experienced?

That’s part of it. Even the very simple example we showed all the time (the mbeddr requirements language prototype with flight rule calculation and a value debugger) was horribly slow with caching disabled. As soon as we had any kind of loop (the math sigma function), it became unusable.
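To illustrate why a loop hurts so much: a naive tree-walking interpreter re-interprets the body AST on every iteration, paying the full dispatch cost each time. A minimal sketch (my own toy AST, not the actual mbeddr or framework code) of a sigma expression:

```java
// Hypothetical toy AST for a sigma (summation) expression, evaluated by
// naive tree walking: Sigma re-interprets its body node on every iteration,
// so interpretation overhead is multiplied by the loop count.
interface Expr {
    long eval(long i);  // 'i' is the current loop variable's value
}

record Const(long v) implements Expr {
    public long eval(long i) { return v; }
}

record Var() implements Expr {
    public long eval(long i) { return i; }
}

record Add(Expr left, Expr right) implements Expr {
    public long eval(long i) { return left.eval(i) + right.eval(i); }
}

// sigma_{i=lo}^{hi} body(i)
record Sigma(long lo, long hi, Expr body) implements Expr {
    public long eval(long ignored) {
        long sum = 0;
        for (long i = lo; i <= hi; i++) {
            sum += body.eval(i);  // full AST walk, every single iteration
        }
        return sum;
    }
}
```

This per-iteration walk is precisely the overhead a Truffle-style partial evaluator can compile away, which is what makes it attractive here.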

1 Like

What would be interesting about this interpreter for my daily life is that we often have two semantics: one in the Domain IDE that the domain experts work with, and a separate one for the deployed runtime. The first is typically interpreted, and the second generated. It would be interesting if Niko’s work led to an interpreter that’s performant enough to replace the generated semantics. This would mean that we’d have to maintain only one interpreter, instead of an interpreter and a generator (plus somehow making sure that those two semantics coincide).

2 Likes