A virtual meetup

Could I get a calendar invitation as well? I have a premium zoom account through my work email. acouch@damianoglobal.com

Guys! It is not Federico’s job to send everyone here a personal email calendar invite. Let’s announce things here, and then everybody manages their own calendar!


Agree. Let’s not record. The main point of these Meetups is to actually meet and chat and hangout.


I do not mind, and I hope this becomes a regular appointment, so having a reminder in the calendar could be good. Also, if we later need to change the meeting number, we will have a way to inform everyone who gets the invite, simply by updating it.

Thank you to everyone who participated yesterday! It was fun.

A quick reminder to everyone joining the conversation:

  • We are organizing an online meeting, which is free for anyone to join
  • We are meeting on Zoom: you do not need any Zoom subscription to participate. Just click here: https://zoom.us/j/278156178
  • We are meeting every Thursday at 7PM CET
  • If you want to get a calendar invite, tell me and I will send you one

Yesterday there were nine of us, and we had a great conversation.

If I understood the idea correctly, we would like to have a demo each time, just to get the conversation started. Also, we would like to sketch a list of topics in advance. I think @Niko suggested a topic at the end of the last meeting which we could discuss next time. I am not 100% sure I remember it correctly, so it would be nice if he could write it down here.

I am looking forward to the next call!


My suggestion relates to A proposal for a future tool platform: we already discussed this topic in yesterday’s call; I was wondering how to split this rather huge idea into smaller parts.

I can imagine different levels of “small”:

  • Small enough to have intermediate, usable results
  • Small enough for different companies to pick up a part
  • Small enough to tackle a part in a long weekend’s hackathon

Maybe I should add some rationale on these levels:

Small enough to have intermediate, usable results
This is the minimum we have to do. Even if we found a huge sponsor with “unlimited” money, they wouldn’t wait 2+ years until all of the proposed aspects are done. We have to show usable progress.

Small enough for different companies to pick up a part
If we couldn’t find one big sponsor but had to distribute the work over several parties with their own focus, we’d need to find a sensible way to cut the work.

Small enough to tackle a part in a long weekend’s hackathon
That’s the only community-driven possibility I can see. Even without a sponsor, we might be able to come together in small groups for a long weekend to work on one focused area. (The hackathon approach is not mandatory, but we’d need the same partition size if individuals or small groups would like to take up some of the work.)


I think that focusing on devising a “future” tool platform and decomposing the problem into “small” parts is a valuable point, but we risk forgetting the first raison d’être of today’s tools: they are meta-tools for building better tools.

For the future to become the present, we need to start from the present tools we are working on/with and build new tools by using them or evolve them in order to reach the future we dream of.

For me, a modern language workbench is all about providing the DSLs for accelerating the evolution of the platform itself (i.e. the domain level) and for building a new foundation (i.e. the framework level).

For instance, I am working on a language workbench, the Whole Platform, which has been open source for twenty years now; I started developing it in ’84.
It is still very far from what I already had in mind at the time, but it is getting closer and closer to the tool I need to build what I really want.

I recently worked on porting Whole to the Swift/iOS platform, and I have had many opportunities to wonder how much I could shorten the development time by starting from scratch.
So far, every time I tried to take some shortcuts, I ran into problems that I didn’t remember anymore, because they are hidden by the domain level I am used to dealing with.

For these reasons, I propose to place the idea of “incremental steps” alongside your “small steps”.


I strongly agree with both @Niko and @solmi.riccardo

I think the only way we can eat this elephant is one bite at a time. And this is the case on two levels: the pure implementation level and the design level. If we try to reimplement everything from scratch, we simply cannot afford it, and if we try to come up with a new design, we would run into many challenges. It took 15 years or so to make MPS sort of usable. If we try something different, I think it would take a similar time to come up with something as good.

I think that, if we implemented the active repository, we would have a big and important piece of infrastructure, and yet we would not be able to show any value to the client, because it is an enabler for other tools that have yet to be built. So we would then need to build a projectional editor to work with that active repository. It means a lot of work to be done before being able to show any concrete benefit to the client, but also before being able to get any proper feedback from practical experience with the tool.

Personally, I believe we can tackle small pieces as a community, without financial backing from a medium/large corporation or benefactor.

The problem is that, to have a system that works, we need a lot of different pieces:

  • the editor
  • a way to store the models
  • an API to work with the models
  • a type system

And probably a lot of other things that are more or less useful or mandatory.

Building all of these things is like making one huge jump. How can we split it into smaller jumps, so that we can assess where we are before proceeding? I think one way is to act like ivy (“edera”) and grow on top of an existing system, and for me that system is MPS.

What I think we could do is create a web server that is started from within MPS (possibly in headless mode). At that point we can rely on all the services provided by MPS. This server is reasonably easy to build; initially it would be based on plain HTTP calls. We could then use it to read models stored in MPS and do some visualization, for example. To allow that, the server just needs to expose entire models as JSON documents. Easy to do. And we can start getting some benefits early, by offering users the possibility to visualize models in the browser.

As a second step, we could have read-only actions executed on the server side, with the results sent back through HTTP calls. For example, one could ask to generate a PDF or an image out of some model. The server could do that using the MPS model APIs and just send the result back in the HTTP response. This would immediately provide some more value.

Then we could have this server support some editing, perhaps limited. To do that, the server should be able to accept changes through HTTP calls or maybe websockets, and then execute those changes on the models. At this stage we would need to do some non-trivial work on the web editor, but we should aim to do as little as possible on the client side and as much as possible on the server, benefiting from the MPS APIs.

At that point we could start moving services out of MPS and into pieces of code that call the MPS server to read the models and do some elaboration: for example, error checking, the type system, or scope calculations. We can do that one piece at a time, while keeping the editor usable.

Finally, we can throw away the model-storing part of MPS, rewriting that last piece in our new system. The result of the model storage plus the services we moved out of MPS would be the active repository, or something very close to it. And the moment we have it, we would be able to use it, because we would already have the web editor.

We could keep using MPS for as long as we want to define the languages, until we really want to move that part out of MPS too, getting a system that is fully independent.

For the time being, I have restarted some experiments on this that I did a few months ago. There is some non-trivial work needed to make a projectional editor usable, but if we use it to complement MPS initially, instead of replacing it, many limitations become more acceptable. We can get value out of it long before we have a full-fledged editor, by using it to see read-only projections of models written in MPS, and then to perform some editing on models that have been created in MPS. Over time we can do more and more on the web side and less and less on the MPS side.

Would anyone be interested in this MPS server and in a kernel of a projectional web editor that communicates with such a server?

I agree very much with re-using MPS. However, I would start the other way around: using MPS to access a to-be-developed active repository.

Custom persistence in MPS is quite feasible. Getting the server right, including the way to send data back and forth to the client, seems to me the conceptually hardest part. Thus, I’d start with that one.

On the other hand, we have already identified two things we could start with, rather independently (-:


I think that, to start from the active repository implemented as MPS custom persistence, one could focus on solving a key problem for users. Perhaps it could be avoiding git, with automatic saving and a different mechanism for branching/versioning (not based on git). By solving a key problem, it could be easier to finance the work.

I think it is good because we have basically two complaints from users: 1) they want the editor on the web (no installation, no extra menus), and 2) they do not want to deal with git.

I agree that we could tackle the two problems from the two directions, if we use MPS as the core piece that keeps everything together

I think there are two fundamental decisions that have to be made.
First, do we want to incrementally build on existing technology, which
is likely going to be MPS for the people talking here, or do we want
to build something new as fast as possible?

And second, do we want to start with the client/editor or do we
want to start with the server/repository.

The path that takes MPS as a basis has already been explored by
Sascha. And he has gotten quite far. If you are willing to run an MPS
instance on the server, he has a working (of course not yet perfect)
browser-based editor. He also has cloud-based infrastructure that
starts up the necessary number of MPS instances and detects if they
crash in order to restart them. He also has database storage that
includes operational-transform real-time collaboration, which he can
plug into MPS, replacing the file-based storage. These two things can
be used together or independently. I think that, with a few months of
additional work, this could be used for initial projects.

My paper sketched the second approach, which works without MPS, except
maybe for using it for language definition. Regarding the second
decision, client vs. server, my paper emphasizes the server, for two
reasons. First, I think this will more easily lead to a shared
infrastructure into which various parties can plug their own tools,
including MPS and including ProjectIt. Second, the decision is a
consequence of itemis’ Convecton project, where we focused on the
editor, got hung up in browser technology and UX discussions, and
ultimately didn’t deliver much useful stuff.

While I really appreciate Sascha’s work in relation to MPS, I think it
is really important to start something new. Not just because of the
doubts regarding MPS’ future that we discussed in our last video
conference, but also because it’s really hard to make MPS do all
the things that a modern web-based environment expects.

None of what I said, of course, diminishes what Niko and Federico
emphasize, which is that we have to start with something small and grow
it incrementally, delivering value step by step after a reasonable
amount of initial work. I just think that the fundamental questions I
am asking here will guide what these initial work packages are. So we
have to make a decision.


I just realized, that we have started discussing the topic in writing instead of doing this in our next video conference :slight_smile:

Too eager to get started :smiley:

But do we keep discussing here, or in the original thread? Both are fine with me, but let’s decide.
In case we keep discussing here, let’s make another thread with the details of the virtual meetup, and make that one “sticky”.


Agreed. Let’s discuss here, and as soon as I am at the computer I will create the other sticky post for the meetup.


It is done: there is now a new topic for the recurring meeting.



Do you want to rename and move this topic? Or merge it with the platform proposal one?

I just wanted to post a summary of today’s discussion, and I cannot judge whether here or the platform proposal topic is the right place.

Results of video discussion 2020-03-26 about active repository

The crux of the whole endeavor is the active repository itself, thus we should start with it.
We think a hackathon of a few days would be a good start, to get something going.

We would need expertise in the following fields, to make sure we cover the relevant bases (with candidates):

  • Incremental update at scale (IncQuery, @voelter will ask)
  • Conflict resolution for updates (JetBrains OT (@vaclav.pech will ask), Sascha OT)
  • Cloud-based, scalable, platform-independent servers (@pjmolina)
  • Type system (language engineers should know enough to hack a primitive one)
  • Interpreters (@Niko)
  • Web-based editor (@pjmolina, @jos.warmer)

Not necessarily part of the hackathon, but aspects we discussed:

  • We need a good “marketing” use-case to demo the technology.
  • Access control is probably not that hard to build, but influences API design.
  • Editors, learning, integration with existing systems, etc. are important aspects, but probably don’t influence the fundamental design that much.

@All: Please amend / correct as you see fit.


I would add to the list of relevant bases:

  • Continuous integration connector (I don’t know specific candidates with experience in combining incremental updates with CI)