A vision for MPS and GraalVM-based Interpreters

I'm missing the goal of it.
What do you expect the benefits to be over the current interpreter framework?

Very good point.

In short:

  1. Very good performance
  2. Tool support good enough that it's easy to use
  3. Stand-alone capabilities so we don’t need a separate generator

In more detail:
ad 1: I think once we get to executable “megamodels”, caching alone won't be enough for interpreter performance.
We will need smart recalculation (a.k.a. incremental updates), but also genuinely fast execution.

The threshold for a “megamodel” is pretty low, IMHO: as soon as we have any kind of live execution on base data that isn't mostly static, we need to re-evaluate the better part of the system quite regularly.

A real-world example: I’m currently working with feature models à la IETS3. The solver can easily handle the model (< 1 s), but it still runs every time I change anything in the model. This leads to a quite poor editing experience.
Of course this can be improved by re-evaluating less frequently. But even implementing that well is not an easy task. And even if we managed it, the actual evaluation still needs to be fast enough that the user can intuitively relate a change to its effect. And even if we dropped this requirement and evaluated asynchronously, getting that right is hard.
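To make “smart recalculation” a bit more concrete, here is a minimal sketch in plain Java (all class and method names are hypothetical, not from any existing framework): derived values record which inputs they read, so changing one input invalidates only its direct dependents instead of forcing a full re-evaluation. For brevity it only tracks direct input→derived edges, not transitive chains of derived values.

```java
import java.util.*;
import java.util.function.Supplier;

// Minimal incremental-recalculation sketch: derived values record which
// inputs they read; changing an input invalidates only its dependents.
final class IncrementalCache {
    private final Map<String, Object> inputs = new HashMap<>();
    private final Map<String, Object> cache = new HashMap<>();
    private final Map<String, Set<String>> dependents = new HashMap<>();
    private String current; // name of the derived value currently being computed

    void setInput(String name, Object value) {
        inputs.put(name, value);
        // Invalidate only the derived values that actually read this input.
        for (String d : dependents.getOrDefault(name, Set.of())) cache.remove(d);
        dependents.remove(name); // edges are re-registered on recomputation
    }

    Object readInput(String name) {
        if (current != null) // record the dependency edge input -> derived
            dependents.computeIfAbsent(name, k -> new HashSet<>()).add(current);
        return inputs.get(name);
    }

    Object derived(String name, Supplier<Object> compute) {
        if (cache.containsKey(name)) return cache.get(name); // reuse, no recompute
        String prev = current;
        current = name;
        Object v = compute.get(); // registers dependencies via readInput
        current = prev;
        cache.put(name, v);
        return v;
    }
}
```

Usage: after `setInput("x", 2)`, a derived value computed via `derived("y", () -> (int) c.readInput("x") * 10)` is cached; a later `setInput("x", 5)` invalidates only `"y"`, and unrelated derived values stay cached.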

ad 2: We will never manage to completely hide the complexity of interpretation, but the current framework gets us quite far. Truffle also helps a lot to make advanced optimizations accessible to those of us without a “PhD mult.”; however, I agree with @ftomassetti that it still needs considerable investment. So we should bridge the gap between these two.

However, even with the current framework, writing interpreters is a lot harder than “regular” coding. Part of this is inherent complexity, but a big part is inferior tooling. Thus, I think we need good tooling to move interpreter implementation from merely feasible to easily doable.
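To illustrate the kind of optimization Truffle automates, here is a plain-Java sketch (explicitly not the real Truffle API; all class names are made up): an addition node speculates that its operands are ints and rewrites itself to a generic version the first time that speculation fails. Truffle's node rewriting and `@Specialization` machinery generate this pattern for you, which is exactly the part that is hard to write and debug by hand.

```java
// Sketch of Truffle-style self-specialization in plain Java (not the real
// Truffle API): an Add node first speculates on ints, then rewrites itself
// to a generic implementation when the speculation fails.
abstract class AddNode {
    abstract Object execute(Object left, Object right);
}

final class IntAddNode extends AddNode {
    AddNode replacement; // set when the speculation fails ("node rewriting")

    @Override
    Object execute(Object left, Object right) {
        if (left instanceof Integer l && right instanceof Integer r)
            return l + r;                    // fast path a JIT can turn into a plain int add
        replacement = new GenericAddNode();  // speculation failed: rewrite to generic node
        return replacement.execute(left, right);
    }
}

final class GenericAddNode extends AddNode {
    @Override
    Object execute(Object left, Object right) {
        if (left instanceof Number l && right instanceof Number r)
            return l.doubleValue() + r.doubleValue();
        return String.valueOf(left) + right; // last resort: string concatenation
    }
}
```

The point is not this toy code itself, but that every operation in a hand-written fast interpreter needs some variant of it, and today's tooling gives little help writing, testing, or visualizing these state machines.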

ad 3: If we had good enough performance (1) and tool support similar to regular development (2), why would we still need to write a generator to get our logic running outside the modeling system? Truffle already runs on the JVM and as a native image, and I have some hopes for a web-based runtime (see Hi I'm Mani - excited to be part of this forum - #3 by Niko). This would save us a huge amount of work.