In my experience, the scalability approach depends on the customer's situation and is often not a binary selection among the contexts presented in the article (Language, Generator, Model, and Generated artifacts). For example, a team may include domain experts who only create models but never run generators or own the produced artifacts, while other project members model other aspects in parallel and finally generate the code/config/tests/etc. that they then own.
In case technology has some influence here: my experience is with MetaEdit+, in which both the language engineering team and the language users can work collaboratively in real time (see the video "Working together: MetaEdit+ multi-user for collaborative modeling" on YouTube). As a result, language users can request modifications from the language engineering team and see the changes made to the language immediately reflected in their models.
Also, having a single team responsible for all parts (called a "lake" in the article) is something we don't often see, as the key benefit of raising the level of abstraction and hiding details does not really materialize there. Obviously there is no scalability either, as there is just one team.
What we have seen is that the language engineering team is not large; the people actually implementing all this are often just 1-2 engineers, as in the 10 company cases published at ACM MODELS (paper: https://metacase.com/papers/effort-create-domain-cameraReady.pdf). The largest language engineering team I know of was about 20 people, since the domain had multiple parts, but the actual implementation was still done by 2-3 persons. Later, once things were more stable, the language engineering team became fundamentally smaller, while the number of language users continued to be in the hundreds or thousands.