Triple Graph Grammar

From Wikipedia, the free encyclopedia

A triple graph grammar (TGG) is a special type of graph grammar, used mainly for bidirectional model-to-model transformations. The distinguishing feature of triple graph grammars is that their production rules consist of three subgraphs, two of which represent the two models/graphs involved (the source and target graphs). The third subgraph, the so-called correspondence graph, connects related parts of the source and target graphs and thus sits, so to speak, "between" the other two graphs.

Principles and Semantics

Fig. 1: Classic notation of a TGG rule

A TGG consists of several TGG rules, whose graphs are typed and attributed. Each of these production rules describes one way in which two graphs and the associated correspondence graph can be modified simultaneously. These rules are thus graph rewriting rules, whose left-hand side specifies a precondition (the pattern graph) and whose right-hand side specifies a possible replacement (the replacement graph). In addition, a so-called axiom describes the starting point of the three graphs from which the rules are applied; the axiom thus corresponds to the start symbol of a classical grammar.

The individual production rules of a TGG are monotonic. This means that all nodes of the pattern graph ("left-hand side") are also present in the replacement graph ("right-hand side"). ("Left" and "right" here do not refer to positions in the graphical representation of the rule, but are the usual terms in graph rewriting systems.) TGGs therefore cannot be used to define delete operations, although for their usual field of application this is not a real disadvantage. In addition, monotonic rules are less complex (also with regard to the graph parser required).

The graph elements of the pattern graph (left-hand side) are also referred to as the context graph. Newly created graph elements, i.e. elements that appear only on the right-hand side of the rule and not on the left, are referred to as the produced graph.
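To make the rule anatomy concrete, here is a minimal sketch in Python. All names (`Graph`, `TGGRule`, `produced`) are illustrative; this is a toy stand-in for a real TGG engine, assuming a simple set-based graph representation:

```python
# A minimal sketch of a monotonic TGG rule (illustrative, not a real TGG tool).
from dataclasses import dataclass

@dataclass(frozen=True)
class Graph:
    nodes: frozenset
    edges: frozenset  # (source, label, target) triples

@dataclass(frozen=True)
class TGGRule:
    left: Graph   # precondition: pattern/context graph
    right: Graph  # replacement graph

    def is_monotonic(self) -> bool:
        # Monotonic: every element of the left side also appears on the right,
        # so the rule can only create elements, never delete them.
        return (self.left.nodes <= self.right.nodes
                and self.left.edges <= self.right.edges)

def produced(rule: TGGRule):
    """The produced graph: elements that appear only on the right-hand side."""
    return (rule.right.nodes - rule.left.nodes,
            rule.right.edges - rule.left.edges)
```

In this encoding, deleting the monotonicity check is exactly what TGGs forbid: a rule whose left side contains an element missing from the right side would be a delete operation.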

Each TGG thus defines a language of graph triples. This language contains all graph triples that can be generated from the axiom using the given TGG rules.

Such a TGG can also be used to produce, starting from an existing instance source graph S, the missing target graph T. To this end, one tries to find a sequence of production rules that generates the source graph S on the source side. If the correspondence and target graphs are modified along the way as defined by the TGG rules, the result is the graph triple that would have resulted if all three graphs had been created simultaneously using the rules.

Application to models

The TGG concept can be transferred from graphs to MOF models. TGG rules then describe how concrete instance models may be modified. First, the left-hand side of the TGG rule is searched for in the instance models using pattern matching. If the left-hand side is found (i.e. the rule is applicable), the right-hand side of the rule describes how the rule is applied, i.e. how the instance models may be changed at this match site.

Models contain objects and links that are instances of the classes and references of a metamodel. The nodes of a TGG rule are therefore typed by the classes, its edges by the references of the metamodel. During pattern matching, a node may only be mapped to objects whose class corresponds to the node's type; likewise, an edge of a TGG rule may only be mapped to links whose reference corresponds to the edge's type.
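Type-constrained matching can be illustrated with a brute-force sketch (function and tuple encoding are illustrative assumptions; real TGG tools use far more efficient pattern matchers):

```python
# Illustrative type-constrained pattern matching: a rule node may only be
# bound to a model object whose class matches the node's type.
from itertools import permutations

def find_match(rule_nodes, model_objects):
    """rule_nodes: list of (name, type); model_objects: list of (id, class).
    Returns a dict mapping rule-node names to object ids, or None."""
    for combo in permutations(model_objects, len(rule_nodes)):
        # accept a candidate assignment only if every node's type matches
        # the class of the object it would be bound to
        if all(obj_cls == node_type
               for (_, node_type), (_, obj_cls) in zip(rule_nodes, combo)):
            return {name: oid for (name, _), (oid, _) in zip(rule_nodes, combo)}
    return None
```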

The three columns of a TGG rule (source, correspondence and target graph) are also called domains . A domain typically has an associated metamodel. The TGG nodes and edges in this domain are then typed using this metamodel.

Graphic syntax

The rules of a triple graph grammar are graphs and are classically represented in a kind of two-part object diagram, with one part each for the left and right side of the rule (see Fig. 1). The graphical representation can enable more intuitive access than a textual representation.

Fig. 2: Compact notation of the TGG rule from Fig. 1

Instead of the classic form of representation (with separate left- and right-hand sides), a compact notation is often found in which the entire rule is shown in a single graph (see Fig. 2). In this compact representation, the elements of the pattern graph (context) are usually drawn in black, while those of the produced graph (i.e. the elements that appear only on the right-hand side of the rule) are drawn in green and with special markers (such as "<<create>>" or "++").

Application scenarios

A triple graph grammar basically describes how two graphs can be modified in parallel in order to remain consistent with one another (according to the TGG). By applying the rules, models that are consistent with one another can be generated. In practice, however, this is rarely used; instead, TGGs are interpreted for different application scenarios.

Forward and backward transformation ("batch transformation")

Fig. 3: Semantics of the various TGG rule parts in a forward transformation

In the forward transformation application scenario, an existing source model is to be transformed into the new target model to be generated. The basic idea is to find a derivation sequence according to the TGG rule set that would generate the existing source model in the source domain. The associated correspondence and target model can then be generated according to this sequence of rules. Finding this derivation sequence, known as graph parsing, can be very time-consuming, however: In particular, if there are several rules that fit in the same place in the source model, all of these options may have to be tried out.

For a forward transformation, translation rules are usually derived from the basic rules, or the basic rules are interpreted differently: since the source model already exists, the productions of the source domain must not be executed. Instead, the nodes and edges of the produced graph of the source domain are mapped onto existing source model elements. Model elements covered in this way are referred to as bound. It must be ensured that each element of the source model is bound exactly once, i.e. covered by exactly one produced node or produced edge. Context nodes and edges, in contrast, may only be mapped to model elements that are already bound. Figure 3 illustrates these constraints for a forward transformation. They ensure that the transformation yields only model triples that could also have been created by building all three models simultaneously.

A transformation is complete only when all elements of the source graph are bound, i.e. have been processed by the transformation. If elements of the source graph remain unbound but no rule is applicable, the transformation is incomplete. If a different rule was applicable at some earlier point, one must now return to that point and apply the other rule (backtracking).
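The search for a complete derivation sequence with backtracking can be sketched as follows. This is a toy model in which "applying a rule" simply binds a fixed set of source elements; the function and the pair encoding of rules are illustrative assumptions, not an actual TGG tool API:

```python
# Sketch of forward transformation as a backtracking search for a rule
# sequence that binds every source element exactly once.

def transform(source_elements, rules, bound=frozenset()):
    """rules: list of (consumes, produce) pairs, where `consumes` is the set
    of source elements a rule would bind and `produce` is its target output.
    Returns the list of produced target elements, or None if no complete
    derivation covers every source element exactly once."""
    if bound == source_elements:
        return []  # every source element is bound: transformation complete
    for consumes, produce in rules:
        # a rule applies only to existing, still-unbound source elements
        if consumes <= source_elements and not (consumes & bound):
            rest = transform(source_elements, rules, bound | consumes)
            if rest is not None:   # this branch led to a complete cover
                return [produce] + rest
            # otherwise backtrack and try the next applicable rule
    return None
```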

The direction of the transformation can be reversed by simply exchanging the meaning of the source and target models. Forward and backward transformations can therefore be derived from a TGG. There is no need to manually specify two different transformations for the two directions.

Example

An example of a model transformation in the context of software development is object-relational mapping , in which objects are stored persistently in a relational database . Corresponding tables are created in the database for the classes whose objects are to be persisted.

One possible mapping of classes to tables is to create one table for each inheritance hierarchy. A base class is translated into a table, and all classes derived from it also use this table of the base class. The TGG rule shown in Fig. 2 comes from a rule set that defines such a mapping. This rule describes that each base class in the classes domain corresponds to a table in the database domain .
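The effect of this "one table per inheritance hierarchy" rule set can be mimicked in plain code (a hypothetical helper that illustrates the mapping's result, not a TGG implementation; the `T_` naming is an assumption):

```python
# Toy rendering of the "one table per inheritance hierarchy" mapping:
# base classes get their own table, derived classes reuse the table of
# their (transitive) base class.

def map_to_tables(classes):
    """classes: dict mapping class name to its base class name (or None).
    Returns a dict mapping each class name to its table name."""
    tables = {}
    def table_for(cls):
        if cls not in tables:
            base = classes[cls]
            # a base class gets its own table; a derived class inherits the
            # table of its base (works for multilevel inheritance too)
            tables[cls] = f"T_{cls}" if base is None else table_for(base)
        return tables[cls]
    for cls in classes:
        table_for(cls)
    return tables
```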

Fig. 4: Forward transformation of classes into a database schema (animated)

Fig. 4 shows the entire TGG rule set for this mapping on the left. At the top is the axiom, which associates a package with a database schema. The next rule associates a base class with a table; its negative application condition prevents the rule from also being applied to derived classes. The third rule then associates derived classes with the table of their base class. This also works for multilevel inheritance, since the context of a rule application can be formed from the produced part of an earlier application of the same rule. The last rule finally translates attributes of a class into columns of the class's corresponding table.

On the right, one can see how, in a forward transformation, the target model is created step by step from the source model by applying the various TGG rules. In the initial situation, the root elements of the three models (the objects :Package, :PackageToSchema, :Schema) are already bound by the axiom. The various TGG rules are then tried in turn. Each TGG rule is processed in three steps:

1. First, pattern matching is used to try to find parts of the source, target and correspondence models that match the context graph of the rule.
2. If a part matching the context graph was found in step 1, a part matching the produced source graph is then searched for in the source model. If a suitable part is found, the corresponding elements are marked as bound and the process continues with step 3; if no part can be found, the process returns to step 1 and another match for the context graph is sought.
3. If matches were found for both the context part and the produced source part of the rule, the rule can be applied: model elements are generated for the produced correspondence and target parts and marked as bound at the same time. Finally, any necessary attribute value assignments are carried out.
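The three steps above can be sketched as follows. This is highly simplified: "matching" is reduced to set membership and a rule is a plain dict; all names are illustrative, not from any TGG tool:

```python
# Sketch of applying a single TGG rule in a forward transformation.

def apply_rule(rule, source, target, bound):
    """rule: dict with 'context', 'produced_src', 'produced_tgt' sets.
    Mutates `target` and `bound` in place; returns True if applied."""
    # Step 1: the context must match elements that are already bound
    if not rule["context"] <= bound:
        return False
    # Step 2: the produced source part must match existing, unbound
    # source elements (they are translated, not created)
    if not (rule["produced_src"] <= source
            and not rule["produced_src"] & bound):
        return False
    # Step 3: apply the rule - create the produced correspondence/target
    # elements and mark the matched source elements as bound
    bound |= rule["produced_src"]
    target |= rule["produced_tgt"]
    return True
```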

Fig. 5: Pattern matching process and application of a rule (animated)

Fig. 5 shows an example of how pattern matching and the application of a TGG rule for translating attributes work.

Synchronization ("incremental transformation")

With the help of triple graph grammars, not only one-time model transformations (batch transformations) can be carried out. Since the correspondence graph marks associated model parts, subsequent changes to one of the models can also be propagated to the other model without regenerating it completely ("incremental transformation").

If, for example, a new model element is added, it is only necessary to check whether this insertion makes further rules applicable, and then to execute them. Besides improved runtime, this has the advantage that target-specific information that may have been inserted into the target model in the meantime (i.e. information not relevant to the model transformation) is not lost.
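The incremental idea can be illustrated in the same toy set-based style (function name and rule encoding are assumptions for illustration only):

```python
# Sketch of incremental propagation: after adding a source element, only
# rules whose produced source part covers the new element are re-checked,
# instead of re-running the whole batch transformation.

def propagate_addition(new_element, rules, source, target, bound):
    """rules: list of (context, produced_src, produced_tgt) set triples.
    Mutates `source`, `target` and `bound`; returns True if a rule fired."""
    source.add(new_element)
    for context, produced_src, produced_tgt in rules:
        if (new_element in produced_src
                and context <= bound
                and produced_src <= source
                and not produced_src & bound):
            bound |= produced_src
            # existing target-specific content stays untouched: we only
            # add the elements this rule produces
            target |= produced_tgt
            return True
    return False
```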

History

Triple graph grammars are an extension of pair grammars. They were developed in 1993 by Andy Schürr and first published in 1994 in the paper "Specification of Graph Translators with Triple Graph Grammars". In contrast to pair grammars and EBNF-based approaches, they also allow productions that are not context-free: with the help of the correspondence graph, earlier rule applications can be referenced explicitly ("context-sensitive").

While awareness of TGGs remained limited in the first few years after publication, the number of scientific publications on the topic increased significantly around 2000. By 2011, the original publication had been cited by over 400 publications.

Since the introduction of TGGs, numerous changes and additions to the concept have been proposed. A variety of systems and implementations exist that are based on TGGs or modifications thereof.

As a solution for model-to-model transformations, TGGs compete with the Object Management Group's QVT standard. QVT Relations, part of the QVT specification, is very similar to TGGs.

TGG tools

So far, programs for model transformations with TGGs have mainly been developed at universities. There are several approaches that differ in terms of orientation and technology.

| Name | Developer | License | Remarks |
|---|---|---|---|
| TGG interpreter | University of Paderborn | EPL (open source) | Eclipse/EMF-based; TGG rules are interpreted at runtime |
| eMOFLON | TU Darmstadt | LGPL (open source) | Eclipse/EMF-based |
| Fujaba | University of Paderborn et al. | LGPL (open source) | Development of the TGG subproject discontinued |
| MDELab, plug-in MoTE | Hasso Plattner Institute | CPL (open source) | Eclipse/EMF-based; TGGs are translated into story diagrams for execution |
| EMorF | Solunar GmbH | EPL (open source) | Eclipse/EMF-based; TGG rules are compiled and interpreted at runtime |
| HenshinTGG | Université du Luxembourg et al. | ? | Eclipse/EMF-based |

Related approaches

Other declarative model transformation languages such as Tefkat or MOF QVT-R have similarities to TGGs. ATL, as a hybrid transformation language, also offers declarative constructs in addition to its imperative components.

Literature

  • A. Schürr: Specification of Graph Translators with Triple Graph Grammars. In: Proceedings of the 20th International Workshop on Graph-Theoretic Concepts in Computer Science. Lecture Notes in Computer Science, Vol. 903, 1994, doi:10.1007/3-540-59071-4_45 (PDF file; 75 KB).
  • A. Königs, A. Schürr: Tool Integration with Triple Graph Grammars - A Survey. In: Electronic Notes in Theoretical Computer Science, Vol. 148, 2006, pp. 113–150, doi:10.1016/j.entcs.2005.12.015.
  • H. Giese, R. Wagner: From model transformation to incremental bidirectional model synchronization. In: Software and Systems Modeling, Vol. 8, No. 1, 2009, pp. 23–43, doi:10.1007/s10270-008-0089-9.
  • J. Greenyer, E. Kindler: Comparing relational model transformation technologies: implementing Query/View/Transformation with Triple Graph Grammars. In: Software and Systems Modeling, Vol. 9, No. 1, 2010, pp. 21–46, doi:10.1007/s10270-009-0121-8.

References

  1. J. Greenyer, S. Pook, J. Rieke: Preventing Information Loss in Incremental Model Synchronization by Reusing Elements. In: Modeling Foundations and Applications. Lecture Notes in Computer Science, Vol. 6698, 2011, pp. 144–159, doi:10.1007/978-3-642-21470-7_11.
  2. Citations of the publication "Specification of Graph Translators with Triple Graph Grammars". Google Scholar, accessed November 1, 2011.
  3. J. Greenyer, E. Kindler: Comparing relational model transformation technologies: implementing Query/View/Transformation with Triple Graph Grammars. In: Software and Systems Modeling, Vol. 9, No. 1, 2010, pp. 21–46, doi:10.1007/s10270-009-0121-8.