Hi, I'm Lorenzo Addazi

Hi all!

I am Lorenzo Addazi, an Italian PhD student from Mälardalen University (Sweden) and a happy new member of this community.

My current research project focuses on Executable Modeling and Domain-Specific Languages for High-Performance Computing. Other research interests include Software Language Engineering, Compilers, Model-Driven Engineering, Hybrid Modelling and Distributed Systems.

I have been contributing, whenever possible, to various open-source projects such as Xtext and Eclipse Collections. More recently, I have been playing with (and planning to contribute to) the MLIR project (part of LLVM), which supports the specification of intermediate representation dialects.

Looking forward to learning and sharing ideas with all of you!

/Lorenzo


Welcome, Lorenzo!

Another executable modeling and DSL enthusiast here!

What is your emphasis in high-performance computing? Finding problems in models? Generating high-performance programs from models? Something else?

Hi Rafael!

I am focusing on integrating support for implicit parallelism in fUML/Alf. Ideally, the objective would be to provide support for the definition of implicitly parallel DSLs. Something similar to what is done in the Delite framework or skeleton-based parallel programming in general.
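
To give a rough idea of what I mean by a skeleton, here is a toy Java sketch (plain Java streams, nothing to do with Alf or Delite): the user only states what to compute, and the runtime decides how the work is split across cores.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustrative only: a "map" skeleton in the spirit of skeleton-based
// parallel programming. The caller expresses *what* to compute; the
// scheduling across cores stays implicit, handled by the runtime.
final class Skeletons {

    // Apply 'f' to every element of 'input'; the parallel stream decides
    // how to partition and schedule the work.
    static <A, B> List<B> map(List<A> input, Function<A, B> f) {
        return input.parallelStream().map(f).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> squares = map(List.of(1, 2, 3, 4), x -> x * x);
        System.out.println(squares); // [1, 4, 9, 16]
    }
}
```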

We are targeting heterogeneous embedded platforms with multiple GPUs, CPUs and FPGAs, hence I am also investigating how the fUML virtual machine could be adapted for and deployed on these platforms.

Here is a link to our project description: HERO

What about you?

Hi Lorenzo, not sure I fully understand the second paragraph - do you target those platforms by generating code for them, or by actually deploying the fUML VM on those targets? From the HERO link, I guess it is the former, right?

My goal has always been to bridge the gap between conceptual modeling of solutions for business problems (which are awfully costly with today’s approaches), and producing functional, usable and obsolescence-proof software from those models. Code generation is a dear subject to me, but I am aiming only at sufficiently performant and correct (source) code, readability being just as important as correctness. This article on Cloudfier gives a good overview of how I have been approaching the problem.

We are working on the second, actually. The executable is a partial evaluation of the input model using the fUML-VM. This is especially convenient in domains where certified compilers are a must, multiple DSLs are involved and multiple hardware platforms might be used. Translational approaches require a code generator to be defined for each language-target combination, which is not sustainable in such scenarios. Rather than defining the operational semantics of DSLs in terms of code generation rules, fUML itself can be used (probably not using the graphical concrete syntax).
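
To caricature the difference with a toy Java sketch (the Action types below are invented for illustration and have nothing to do with the actual fUML execution model): a single interpreter walks the model directly, instead of one code generator per language-target combination.

```java
import java.util.Map;

// Illustrative only: a toy "virtual machine" that evaluates a model
// directly instead of generating code for it. One interpreter replaces
// an L x T matrix of generators (L languages, T target platforms).
final class ToyModelVm {

    interface Action { int evaluate(Map<String, Integer> env); }

    record Literal(int value) implements Action {
        public int evaluate(Map<String, Integer> env) { return value; }
    }

    record Read(String variable) implements Action {
        public int evaluate(Map<String, Integer> env) { return env.get(variable); }
    }

    record Add(Action left, Action right) implements Action {
        public int evaluate(Map<String, Integer> env) {
            return left.evaluate(env) + right.evaluate(env);
        }
    }

    public static void main(String[] args) {
        // "Model" of the computation x + 1, evaluated directly.
        Action model = new Add(new Read("x"), new Literal(1));
        System.out.println(model.evaluate(Map.of("x", 41))); // 42
    }
}
```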

I agree that readability is as important as correctness, these are deeply intertwined. What metrics do you use to measure the readability of your generated code?

At this point, I am not applying any metrics, Lorenzo. But there are two aspects I consider:

  1. Ensuring the domain knowledge is clearly represented in the generated code. The modeling language I employ is optimized for expressing intent and domain knowledge in conceptual solution models, and almost no technical choices have to or can be made here. When mapping the conceptual solution to a concrete one, I try to use the most intent-revealing mechanisms available in each target technology, to preserve the domain knowledge as much as possible within the idiosyncrasies of the target languages and frameworks.

  2. Ensuring the code feels natural to human readers. Here, I focus on writing generators that produce code that reads as if written by a fellow programmer competent in the target language. There are two driving forces here: one is to work around the negative perception most programmers have of generated code; the other, just as important, is to make clear there is no lock-in in using the modeling tool - if the team changes its mind about using models and code generators, it can abandon the tooling at any time and keep maintaining the generated code by hand. A rough, hypothetical snippet illustrating both points follows below.
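
Just to illustrate (hypothetical snippet, not actual Cloudfier output): a conceptual rule such as "an order can only be shipped once it has been paid" should surface directly in the generated class, and read as if a fellow programmer wrote it.

```java
// Hypothetical generated code, not actual Cloudfier output: the domain
// rule "an order can only be shipped once it has been paid" stays visible
// in the generated class instead of being buried in plumbing.
public class Order {

    private boolean paid;
    private boolean shipped;

    public void pay() {
        this.paid = true;
    }

    public boolean canShip() {
        return paid && !shipped;
    }

    public void ship() {
        if (!canShip()) {
            throw new IllegalStateException("An order can only be shipped once it has been paid");
        }
        this.shipped = true;
    }
}
```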

In both aspects, I evaluate the quality of the generated code mostly subjectively, basically the same way I would when evaluating code written by a fellow developer. Luckily, once I am happy with the output the generator produces for my sample models, I can be confident the result will have similar quality for user models as well.

On the correctness side, my approach has been the following: since we are talking about executable models, I model both the application and tests for the application, I generate both the application and tests in the target language, and then run the generated tests against the generated application. The same tests should pass or fail at both levels, any failures should be for the same reasons (business rules), and there should be no failures related to implementation issues.
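
Roughly, and with made-up names (this is not my actual generated code, and it reuses the hypothetical Order class from the snippet above): a model-level scenario such as "an unpaid order cannot be shipped" becomes a generated JUnit test that should pass or fail for business reasons only.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical generated test (names invented for illustration): the same
// scenario also exists as a test in the model, so a failure here should
// only ever be about a business rule, never about generated plumbing.
class OrderTest {

    @Test
    void cannotShipUnpaidOrder() {
        Order order = new Order();
        assertFalse(order.canShip());
        assertThrows(IllegalStateException.class, order::ship);
    }

    @Test
    void canShipPaidOrder() {
        Order order = new Order();
        order.pay();
        assertTrue(order.canShip());
        order.ship();
        assertFalse(order.canShip()); // already shipped
    }
}
```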