POC for Low code platforms and DSL integration

Hey everyone!

During the presentation of Generative Objects (Virtual Meetup 30/4: Walter Almeida presents a low code platform), @voelter pointed out that low-code platforms are generally limited to creating data-driven/management applications, and lack the expressiveness and versatility of domain-specific languages.

However, both DSLs and low-code platforms are valuable and powerful in their own unique ways!

The question is: how do we leverage both? @voelter proposed that we build a POC demonstrating how we could do this, and integrate the power of DSLs into the GO platform.

So this post is a call to specify a useful use case that we can then prototype. Can you help me define this use case?

Don’t get me wrong, but I think it’s not a matter of one use case; what separates a typical low-code platform from a language workbench are entire classes of DSLs and even of meta-levels.

So, I would like to start by trying to understand what the actual boundaries of the GO platform are; then, depending on what you want to achieve, we can propose an appropriate use case or collaborate on something.

  • I have seen the definition of the entities of a demo app, but is there a first-class representation of a language metamodel? And if so, can you define multiple of them, use them together, evolve them separately, and have multiple versions of them coexist?

  • I have not seen any behavioral languages (for instance the ones you use for generation). Is that just because you don't have a web notation for them? In that case, at the implementation level, do they share the framework-enforced architecture of entities or not?

  • In a separate post you said that, by design choice, your code generators are model-to-text. This cuts the class of meta* generators you can write down to just one level; is that good enough for your platform, or do you want to change it?

Thank you for today's presentation, and for your decision to open-source your work.


It seems to me that @walter.almeida is not directly suggesting to improve low-code platforms (of which GO can be considered an example) by incorporating all kinds of language workbench functionality into them. (Or are you?)

I worked for a while at Mendix, so I think I can share some Mendix-centric insights here.

It’s absolutely true that Mendix is squarely aiming at boring data-driven business applications. In fact, internally I often heard things along the lines of “if our (prospective) customers just replaced all Excel/Access-‘applications’, we’re already settled”.

It’s true that the Mendix modelling language itself lacks facilities for abstraction, or meta-programming: a degree of reuse through named callables, such as microflows, is all the abstraction you’re going to see. This is partly by design, and partly through sheer ignorance: Mendix did not employ any language engineers before @jos.warmer and I came to work there.

However, we (@jos.warmer, others, and myself) have created the Mendix Model SDK, and the server infrastructure backing it, so you can programmatically interact with a Mendix model. In particular: you can generate models. In other words: you can see Mendix as a generation/transformation target. I think this is a valid way to bring some versatility to low-code platforms: by making a DSL and generating from it to a low-code platform.
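As a rough illustration of that idea (the DSL syntax and the `Entity` shape below are invented for this sketch, and are not the real Mendix Model SDK API), a tiny textual DSL can be parsed into entity definitions that a platform's model API could then ingest:

```python
# Hypothetical sketch: compile a tiny textual DSL into entity definitions.
# Everything here is illustrative, not an actual low-code platform API.
import re
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

def parse_dsl(source: str) -> list:
    """Parse lines like 'entity Customer { name: String; age: Integer }'."""
    entities = []
    for match in re.finditer(r"entity\s+(\w+)\s*\{([^}]*)\}", source):
        name, body = match.group(1), match.group(2)
        attrs = {}
        for part in body.split(";"):
            part = part.strip()
            if part:
                attr, typ = (s.strip() for s in part.split(":"))
                attrs[attr] = typ
        entities.append(Entity(name, attrs))
    return entities

# Each Entity could then be pushed into the target platform's model API.
model = parse_dsl("""
entity Customer { name: String; age: Integer }
entity Order { total: Decimal }
""")
```

The point is only the shape of the pipeline: text in the DSL, a model out, and the low-code platform as the generation target.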


Thank you for your input @solmi.riccardo,

Actually I see two directions we can take.

The first one is not about making the low-code platform a workbench, but about integrating DSLs, language workbenches, and low-code platforms, in two ways:

  • The low-code platform could allow designing DSLs as part of the platform, enabling the creation of domain-specific applications that go beyond pure data-driven applications. The low-code platform provides for creating the boring data-centric part of the application, the APIs, and the UI, and the DSLs are used inside the platform to build the domain-specific parts of the application. And since this is done inside the low-code platform, the DSLs have access to the application metamodel and can build on top of it. The low-code platform provides the data part, the DSL provides the intelligent part, and the code generator puts it all together and generates the application. @voelter was proposing to integrate a simple text DSL inside GO for a first prototype. Maybe we can have a POC use case along these lines.
  • DSLs and language workbenches can connect to a data application modeled and generated by GO, through API calls. This is a simpler, looser integration, and not as powerful, IMHO, as the first one.
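A minimal sketch of the first direction, with an invented metamodel shape and rule syntax: a tiny rule DSL is checked against the application metamodel before code is generated from it, which is exactly what having the DSL live inside the platform buys you.

```python
import re

# Invented metamodel shape: entity name -> {attribute: type}.
metamodel = {"Customer": {"name": "String", "age": "Integer"}}

def compile_rule(rule: str) -> str:
    """Check a rule like 'Customer.age >= 18' against the metamodel,
    then generate a validation snippet for the target application."""
    entity, attr, op, value = re.match(
        r"(\w+)\.(\w+)\s*(>=|<=|==)\s*(\w+)", rule).groups()
    if attr not in metamodel.get(entity, {}):
        raise ValueError(f"unknown attribute {entity}.{attr}")
    return f"def validate_{entity.lower()}(obj): return obj.{attr} {op} {value}"
```

A rule referencing `Customer.age` compiles; a rule referencing an attribute the metamodel does not know is rejected at generation time rather than at runtime.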

The second direction I have in mind is to evolve the GO low-code platform into a kind of language workbench for low-code platforms: a generic low-code platform used to model and generate domain-specific low-code platforms!

The GO platform is self-reflective. To evolve the GO platform, I open the GO platform model inside GO, evolve the metamodel, and regenerate the GO platform. It is a chicken-and-egg challenge, but it is resolved and working now. It is actually a great power.

The current GO metamodel, and associated code generation templates, are designed for data-centric applications.

From GO, we can (with the existing GO metamodel) model an entirely new domain-specific metamodel that could be used to model any domain, any kind of application, beyond data-driven applications. Once this metamodel is created, a new low-code platform (with UI) can be generated, and it will become a domain-specific low-code platform targeting this new domain. Obviously, we will also need to write all the code generation templates to actually transform the model into a running application. The creation of generation templates, linked to the metamodel, should then also be supported by GO.

Therefore GO could become what we might call a “language workbench for low-code platforms”?

So for your question “I have seen the definition of the entities of a demo app, but is there a first-class representation of a language metamodel? And if so, can you define multiple of them, use them together, evolve them separately, and have multiple versions of them together?”

The answer is yes. At least: yes, it can be made possible to define multiple of them, use them together, evolve them separately, and have multiple versions of them together. We have never used the GO platform this way; however, it is designed in its DNA for this, so we can make it happen.


For generation, there is a textual language, based on XML, that describes a hierarchical tree of elements with associated generation templates, which themselves link to metamodel elements. Today this is internal and not visible in the GO platform. So to make what I shared before possible, one step will be to make it visible, so that an end user of GO can modify the generation pipeline or create entirely new ones.
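Since the actual GO pipeline format is internal, here is a hypothetical sketch of what such an XML pipeline description could look like, and how a generator might walk it (the element and attribute names are invented, not GO's real format):

```python
# Hypothetical pipeline description: a tree of output folders and artifacts,
# each artifact linking a generation template to a metamodel element.
import xml.etree.ElementTree as ET

pipeline_xml = """
<pipeline>
  <folder name="src">
    <artifact template="EntityClass" metamodel="Entity"/>
    <artifact template="Repository" metamodel="Entity"/>
  </folder>
</pipeline>
"""

def walk(node, path=""):
    """Flatten the tree into (output path, template, metamodel element) steps."""
    steps = []
    if node.tag == "folder":
        path = f"{path}{node.get('name')}/"
    elif node.tag == "artifact":
        steps.append((path, node.get("template"), node.get("metamodel")))
    for child in node:
        steps.extend(walk(child, path))
    return steps

steps = walk(ET.fromstring(pipeline_xml))
```

Exposing something like this to end users is what would let them modify the pipeline or add entirely new generation targets.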

I am not sure I fully understand the question, as I am not a deep expert in DSLs, but for me: the textual generation artifact of one generation template could be a model-to-model transformation, and we could generate the textual representation of another metamodel, transformed from the initial metamodel. We can also imagine making the build pipeline extensible and integrating other actions that are not necessarily code generation. Does this make sense? Otherwise, can you be more specific or give an example of what you mean, as I feel it could be a very important and interesting point.

Thank you @meinte.boersma for your input

See my answer to @solmi.riccardo: yes, the plan is to evolve GO in this direction, but not as a first step. The first step is to rewrite the code generator and open-source GO. Then we can imagine all kinds of great evolutions!

Yes, this is a business decision. Building a product to solve a specific business need (replacing Excel) is completely different from building a full-fledged workbench technology. The target is not the same.

Yes, I can see this in many existing low-code platforms. We went another route: I was clearly dedicated to building GO as a highly engineered solution, with a longer-term vision than just building data-driven apps. We spent 12 years of R&D on this exact goal, with up to 4 R&D engineers working on it, while the rest of the team was using the platform in the field, with customers, to get direct feedback.

This is a great thing you brought to Mendix! You can likewise programmatically interact with a GO model, and create full models this way, so a DSL can be built on top of this interface. If you are interested in participating in the GO project and leveraging the experience you gained at Mendix, you are more than welcome :slight_smile:


I agree that your contribution has made their platform much more powerful.
I would be curious to know whether they understood the added value, whether the Model SDK is also used internally by them, and whether there are customers already using it.

There were 2 reasons for creating the Model SDK:

  1. It’s used internally by the Web Modeler, which has since been renamed “Mendix Studio”. This also explains why the Model SDK is a JavaScript thing, rather than a Java thing, which might have been more logical for the type of customer that Mendix typically serves.
  2. It allows marketing to say that there’s no vendor lock-in when using Mendix, since you can always get your models out, in a processable form.

The Web Modeler doesn’t seem to be as important anymore as it was originally purported to be. That’s probably because the desktop modeler (“Mendix Studio Pro” these days) is more widely used: being a Windows application doesn’t hinder the people working for Mendix’s customers that much, and the Web Modeler doesn’t provide all of the functionality of the desktop modeler.

I’m not really aware of customers really using the Model SDK, certainly not for generative purposes. I know that it’s used to implement custom model validation for/at a small number of customers.


From what I understood of your explanation, I would say that the answers are no, no, and no!

Let me rephrase your explanation so that you can better tell me what I didn’t understand.

  1. The GO platform has just one metamodel and one code generator, with templates targeting data-centric applications (i.e. the GO platform framework).

  2. The code generator takes exactly one instance of the metamodel and produces a new GO platform product tailored to the domain of the metamodel instance.

  3. When applied to a modified GO platform model instance you get a new version of the GO platform.

  4. When applied to a new DSL defined as a GO platform metamodel instance you get a DSL specific GO platform (assuming the templates are also reasonable for that DSL otherwise you need to provide new ones)

  5. In order to accommodate multiple DSLs together, you need to merge all of their entities into a single GO platform model instance. It is up to you to avoid accidental entity name collisions.

  6. You can have multiple versions of a DSL just in the form of multiple generated GO platforms.

OK, I did not get the question fully right; thank you for clarifying it. Here are my comments:

Yes, at the moment, because so far it has been used for creating data-centric applications.


Yes, this is how we evolve the GO platform, and how we would proceed to create a domain-specific GO platform.

Yes, and yes: most probably you need to provide specific, new generation templates.

Yes, this is the case today: there is no integrated way to combine DSLs. However, we were already working on a way to compose multiple GO metamodels (let’s call them DSLs) into a single aggregate DSL; we just did not have time to go further and complete the work. It is feasible, and yes, indeed, there are name collisions to tackle, which can be solved, for example, by namespacing the DSLs. This is a definite evolution of the GO platform that I have in mind, and it makes sense to me.

So the idea is to create several DSLs, as several GO platform projects.
You could then create a new GO project and “import” the previously created DSLs.

The import can be done in two different ways:

  • The simpler one is for the import to just copy the sub-DSL into the new project. Simpler, but if you evolve the original DSLs, the projects already using them do not get the evolutions…
  • The optimal way is to link to the sub-DSLs. This is possible in GO because all metamodels are exposed through APIs, and we can create a specific GO metamodel data provider that connects to these sub-DSLs.

With the link strategy, more challenges come like:

  • sub-DSLs need to be versioned, because you cannot afford to affect all projects using the sub-DSLs on every evolution of the sub-DSLs.
  • there are still, indeed, name collisions to tackle. This can partly be done using “Business Groups”, which are GO’s way of namespacing entities. However, there is still something to adapt in the current templates to make it fully work; not such a big deal.

Versioning DSLs also brings the complexity of versioning, branching, comparing branches, etc. of models, the same way we do with source code and git. This is also feasible, as we already have a change-tracking mechanism included in the existing GO metamodel, to be used as the foundation of this versioning.
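The idea of building versioning on top of change tracking can be sketched as follows; the change-record shape here is invented for illustration, not GO's actual mechanism. Every model edit is appended to a log, and any past version of the model can be reconstructed by replaying the log up to that point:

```python
# Change log: (version, operation, arguments). Shapes are illustrative.
changes = [
    (1, "add_entity", {"name": "Customer"}),
    (2, "add_attribute", {"entity": "Customer", "name": "age", "type": "Integer"}),
    (3, "rename_entity", {"old": "Customer", "new": "Client"}),
]

def model_at(version: int) -> dict:
    """Reconstruct the model as it was at the given version."""
    model = {}
    for v, op, args in changes:
        if v > version:
            break
        if op == "add_entity":
            model[args["name"]] = {}
        elif op == "add_attribute":
            model[args["entity"]][args["name"]] = args["type"]
        elif op == "rename_entity":
            model[args["new"]] = model.pop(args["old"])
    return model
```

Branching and comparing then become operations on the log itself (two logs sharing a prefix), which is why a change-tracking mechanism is a plausible foundation for model versioning.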

I want to clarify that this is “to be done” work, but it is definitely possible, as we anticipated it and took it into account in our original design.

Well, see above: versioning is partly done through change tracking, but it needs to be further improved to have truly versionable DSLs.

I think this is a valid way to bring some versatility to low-code platforms: by making a DSL, and generate it to a low-code platform.

This is, indeed, the route that I’ve started exploring with Portofino (another kind of “low code” tool, or actually, the evolution of such a tool into an “actually, more code” tool :smiley:).
Its model is, essentially, a graph persisted as XML that represents relational database schemas.
I’m working in my spare time to generalise it and use a DSL to read and write it.
With a sufficiently powerful DSL and underlying model, it will become possible first to represent other kinds of data that is not relational, and then perhaps to work with models that are not data- or document-driven.

Sorry, I definitely wrote cryptically. And yes, you are right: a textual target does not prevent you from chaining model transformations, as long as your transformation language is model-to-model.
The point is that if you use a textual target (real text or instances of a Text DSL, it is the same) to encode a target DSL instance, you lose all the domain information, and your ability to chain additional transformations is severely limited.

For instance, if you are generating Java code as text, you can expand a template by filling its slots with the expansions of nested templates, and so on. But you cannot perform a context-dependent weaving, that is, a semantics-aware change to the actual fragment surrounding the expansion point, because to you it is just text.
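A small illustration of the difference (using Python's own `ast` module rather than Java, for brevity): text-level expansion splices a fragment blindly, while model-level weaving can detect that the fragment reuses a name the surrounding context already owns and rename it before splicing.

```python
import ast  # ast.unparse requires Python 3.9+

surrounding = "total = 0\n{slot}\nprint(total)"
fragment = "total = price * qty"  # reuses a name the context already owns

# Text-level expansion: blind substitution silently clobbers `total`.
text_result = surrounding.format(slot=fragment)

# Model-level weaving: compare the two trees and rename on collision.
ctx_names = {n.id for n in ast.walk(ast.parse("total = 0"))
             if isinstance(n, ast.Name)}
frag_tree = ast.parse(fragment)
for node in ast.walk(frag_tree):
    if isinstance(node, ast.Name) and node.id in ctx_names:
        node.id = "frag_total"
woven = surrounding.format(slot=ast.unparse(frag_tree))
```

The text version ends up assigning to `total` twice; the woven version keeps the context's `total` intact. The renaming here is a deliberately crude stand-in for a real weaver function, but it only works at all because both sides are models, not strings.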

For a low-code platform targeting data-centric applications this may be a minor limitation, but if you plan to generalize to include behavioral languages, it will end up being an important one, in my experience.


Thank you @solmi.riccardo for the clarification. I must say it is still not very clear to me; I am not familiar, for instance, with “context-dependent weaving”. I guess with a concrete example or use case I could project myself into it and understand its relevance.

Given a source model fragment and a target template with a variability point, you may simply want to generate the target template with the variability point replaced by the source model fragment.
You have a context-dependent weaving if the source fragment, the target model, or both are expected to change in the result, based on the application of a weaver function.

I often use the ability to rework the models I am deriving.
I have tried to isolate some examples by minimizing their context; I hope they remain understandable.

  1. If you are generating a metamodel from a stream of model constraints, you cannot simply append a new entity definition; whether you want to merge the partial definitions of an entity on the fly, or merge all partial definitions in a later generation phase, your target must be a model, not text.

  2. In a derived model you may want to perform some refactoring and/or optimizations to improve its quality. For example, in a metamodel you might want to perform an inlining/embedding of some entities according to the structure of the types and of the associations.

  3. If a source fragment contains variables, types, or blocks, you may want to: import the necessary types; rename local variables to avoid shadowing or collisions in the target scope; inline or wrap your expressions/statements depending on the target container’s type or precedence level.

  4. If one of your target languages evolves over time, you may need to migrate your templates, and at the text level this may prove impracticable.
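Example 1 above can be sketched as follows (with invented shapes): partial entity definitions arriving as a stream of constraints are merged into one model, whereas naive text appending would emit duplicate, conflicting declarations of the same entity.

```python
# Stream of constraints: (entity, attribute, type). Shapes are illustrative.
constraints = [
    ("Customer", "name", "String"),
    ("Order", "total", "Decimal"),
    ("Customer", "age", "Integer"),  # second partial definition of Customer
]

def merge(stream):
    """Fold a stream of partial definitions into one model: each entity
    appears once, accumulating attributes, instead of being re-declared."""
    model = {}
    for entity, attr, typ in stream:
        model.setdefault(entity, {})[attr] = typ  # merge, don't append
    return model
```

A text target would have had to emit `Customer` twice (once per partial definition); the model target simply ends up with one `Customer` carrying both attributes.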

All of the above examples are taken from the work I did to add the Swift language to the Whole Platform in the form of a reverse engineering tool that, starting from a grammar-like data model extracted from the official Swift repository, derives the following artifacts:

  • SwiftSyntax - a concrete syntax metamodel of Swift
  • Swift - an abstract syntax metamodel of Swift
  • SwiftNotations - a domain-level definition of a Swift concrete notation
  • SwiftMigrations - Two domain level M2M transformations to transform to/from AST and parse tree
  • Two Swift-level transformations to transform to/from the official parse tree and JSON
  • … and a few others

OK, thank you @solmi.riccardo for your explanation. I don’t know if it would make sense in the GO low-code platform at the moment. I would need to see a concrete case where this is needed, or realize that this technique would solve one of the challenges I am facing with GO.

@solmi.riccardo may I ask you a question?

What is your interest in the GO platform, and what is your intention with all these questions? So that I can understand where you are coming from, what benefit there could be in continuing to share, and how we can do this in an efficient way.

Thank you

You had shown interest in evolving the GO platform from a low-code tool into something more general, so I tried to understand what you have done and the directions you are considering.

I tried to find some missing features that would bridge the gap with LWB tools, and to discuss their relevance with you, but also to show you that they are expensive to add.

If you decide to implement them I could only cheer for you, since I’m the first of the old school of programmers always ready to reinvent the wheel :slight_smile:

On the other hand I got the idea that the most interesting part of the GO platform is the engineering of the frontend generation (starting from the UI metamodel plans shared by @tvillaren).

So I would suggest taking into consideration @meinte.boersma’s proposal to delimit the GO low-code platform with a programmatic/modeled interface and to decouple the rest, perhaps implementing it with an existing LWB.

On the UI front, I think that we can share more, since I too am going towards a multilevel architecture to support desktop and mobile UI frameworks at the domain level.

It would be interesting for me to understand if (or help to make) your web frontend is suitable for the notation of behavioral languages.


Thank you @solmi.riccardo for your answer, and sorry for the late reply!

Yes, I get it; thank you for your input. The idea is not to create a new LWB. For a start, I am not really familiar with LWBs, so I can’t really say that I want to build one!

My objective is to have an open-source low-code platform that can evolve, is versatile, and serves as many people as possible, especially people with impactful projects that can create a new reality and a new way of seeing the world and business. I am especially interested in decentralizing the internet and building services and platforms that help move in this direction. See for instance my post here: https://forum.solidproject.org/t/new-decentralized-social-network-specialized-in-sharing-public-and-creative-common-content/3027

That said, what I envision for the future of GO is not so much a fully fledged LWB, but rather a generic low-code platform that can be used to model and generate specific low-code platforms for specific domain needs, as discussed before. But this is not even the priority at the moment; first is to push GO as it is now.

Yes, got it! We can keep in touch, and your help is most welcome. I am quite busy planning the open sourcing of GO, but going back to this new versatile front end that @tvillaren designed is definitely on the roadmap. And yes, we will decouple all the bits as much as possible, including this front-end part.

Yesterday I found this link about low-code platforms. It is an answer on Quora:


that talks about Code2. Here are the links for Code2

Github — icodebetter/icodebetter

Docs — Getting started

Demo Site — Code2


For those who are interested, I did an online presentation and went deeper into the code of Generative Objects. Here is the link: https://www.youtube.com/watch?v=K7iM9Z6TGG4

I tried the password and it didn’t work.

Does this link to the recording include the proposal under discussion? I think I’m out of the loop.

Is @voelter’s proposal to extend the GO platform to include a DSL workbench? Would this turn the GO platform into an online language workbench?