Head-driven Phrase Structure Grammar


Head-Driven Phrase Structure Grammar (HPSG) is a grammar theory that emerged in the 1980s as a generative grammar theory on the basis of the revival of context-free phrase structure grammars, and it belongs to the family of unification grammars. In HPSG, grammatical rules are formulated as constraints that well-formed sentences and phrases must satisfy; there are therefore no transformation rules. All information about a linguistic sign is collected in a single feature description. In contrast to some other grammar theories, word-specific information is given in full in the lexicon, so that only a few grammar rules are necessary.

Like all phrase structure grammars, HPSG is a constituency grammar: it is based on the principle of constituency, in contrast to a dependency grammar.

History

Head-Driven Phrase Structure Grammar was developed by Carl Pollard and Ivan Sag from the mid-1980s onward. Essential components were inspired by or adopted from older syntactic theories, especially non-derivational approaches such as Categorial Grammar (CG), Generalized Phrase Structure Grammar (GPSG), Arc Pair Grammar (APG) and Lexical-Functional Grammar (LFG), but also the then-prevailing Government and Binding theory (GB) of Noam Chomsky. The representation of the semantics is partly based on situation semantics; formal principles come from computer science. Carl Pollard and Ivan Sag provided the first comprehensive presentation of the theory in the book Information-Based Syntax and Semantics, Volume I, published in 1987 (Pollard, Sag 1987); they presented a revised version in 1994 in the work Head-Driven Phrase Structure Grammar (Pollard, Sag 1994). From the start, other scholars took up Head-Driven Phrase Structure Grammar and proposed modifications, extensions and applications to different languages. There is therefore a multitude of views on many questions, put forward by different researchers to describe different languages. Some grammar theories that take a middle position between HPSG and other theories have also been developed, for example Sign-Based Construction Grammar, which takes up ideas of construction grammar within the HPSG formalism.

Concepts

Basic concepts

In HPSG, all words and phrases are modeled as signs in the sense of Ferdinand de Saussure, i.e. as form-meaning pairs. Syntactic properties, sound structure and the meaning of a sign are represented in a single attribute-value matrix, which is why HPSG is considered monostratal. The attribute-value matrix of each sign contains at least a feature PHON, which represents the phoneme sequence, and a feature SYNSEM, which summarizes information about grammatical properties and meaning in a matrix of type synsem. There are also proposals for the formal representation of further aspects of a language in attribute-value matrices, for example word order (see the section Word order) and syllable structure.

In contrast to many other grammar theories, HPSG is declarative: the entire grammar, including the lexicon, is formulated as a description of grammatically well-formed signs. There are therefore no rules in HPSG for changing or moving constituents. Instead, grammatical rules are expressed solely as constraints that well-formed signs must satisfy. An example is the agreement of a verb with its subject: while some non-constraint-based grammar theories assume, for example, a transfer of features from the subject to the verb, in HPSG the verb and the subject carry corresponding features which, according to certain constraints, must be identical in both signs.

HPSG is also a largely lexicalized grammar theory: grammatical information is to a large extent stored in the lexicon, and the grammar itself only has to provide a few constraints for processing it. For example, the arguments of a verb are specified in lists contained in the verb's feature description; the grammar then constrains how these arguments are realized.

Formal basics

All information about a sign is specified in HPSG in a hierarchically structured attribute-value matrix (AVM for short). Each line of the matrix specifies the value of a particular attribute. Every value has a certain type and can have features of its own. The type determines which features an object has and of which types the corresponding values are. For example, in the formalism of Pollard and Sag 1994, every object of type synsem has a feature LOCAL with an object of type local as its value, and a feature NONLOC with a value of type nonloc. The types form a hierarchy, with subtypes inheriting the features of their supertypes. In graphic representations, types are usually shown in italics on the left. For example, the object represented by the following matrix has the type index and the features PERSON, NUMBER and GENDER; the associated values are of types 2, sg and fem and have no features of their own here:
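In plain-text form, such a matrix can be sketched as follows (the type appears first, followed by feature-value pairs):

[ index
  PERSON  2
  NUMBER  sg
  GENDER  fem ]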

Lists and sets of objects are also permitted as values. For example, the SUBCAT feature takes a list of objects of type synsem as its value:
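Schematically, with angle brackets marking a list (the number of elements shown here is purely illustrative):

[ SUBCAT ⟨ [synsem], [synsem] ⟩ ]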

When reading graphic representations of attribute-value matrices, note that usually only those features of a matrix are shown that are relevant to the question at hand. In addition, longer paths are often abbreviated with "|" in the literature. The following two matrices are therefore equivalent:
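For example, the fully nested matrix and its abbreviated counterpart describe the same object (the value verb is purely illustrative):

[ SYNSEM [ LOCAL [ CAT [ HEAD verb ] ] ] ]

[ SYNSEM | LOCAL | CAT | HEAD verb ]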

An HPSG-based description of a language has at least the following formal components:

  • A signature that defines which types exist, which features they have and of which types the corresponding values are
  • Principles formulating constraints that must hold for all well-formed signs
  • Functions and relations, for example for computing morphological forms and for concatenating lists

The lexicon is either expressed by constraints on the type word, or it is given a status of its own outside the constraints.

Constraints may be formulated using underspecified feature descriptions; when a constraint applies to a feature description, the description must be unified with it to ensure that the constraint is satisfied. Take the head feature principle as an example, which stipulates that in every phrasal sign that has a head, the HEAD value must be identical to that of the head daughter. This can be formulated as an implication (the use of quantifiers is not universal in the HPSG literature):
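A sketch of such an implication in the feature geometry of Pollard and Sag 1994, where the DTRS feature holds the daughters of a phrase and the tag [1] marks the shared HEAD value:

[ phrase
  DTRS  headed-structure ]
→
[ SYNSEM | LOCAL | CAT | HEAD                    [1]
  DTRS | HEAD-DTR | SYNSEM | LOCAL | CAT | HEAD  [1] ]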

The lexicon

In HPSG, the lexicon consists of descriptions of the words of a language, the so-called lexicon entries. For this purpose, a disjunction of the feature descriptions of the individual words can be used:

word → (human) ∨ (house) ∨ ...
This means: every object of type word satisfies either the feature description for "human", or the one for "house", or the one for ...

To enable generalizations, words can be grouped into word classes that capture the features common to all words of a class. Nouns such as woman, sun, cat satisfy a more general scheme for feminine nouns in the singular: in the analysis of Müller 1999, for example, their head features are of type noun, they require an article as a complement, and they have the same values for person, number and gender.
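A simplified sketch of such a scheme (the exact feature geometry varies from author to author; third person is assumed here for illustration):

[ word
  SYNSEM | LOC [ CAT  [ HEAD    noun
                        SUBCAT  ⟨ DET ⟩ ]
                 CONT | INDEX [ PERSON  3
                                NUMBER  sg
                                GENDER  fem ] ] ]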

Some syntactic and especially morphological phenomena are covered by so-called lexicon rules, which license lexicon entries by relating them to other entries. For example, in a simplified grammar, passive verb forms could be licensed by relating each transitive verb to a passive verb whose subject corresponds to the object of the transitive verb. In many HPSG-based theories, the operator "↦" is used, connecting the description of the source word with that of the word licensed by the rule:
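A heavily simplified sketch of such a lexicon rule for the passive, in which the tags [1] and [2] stand for the indices of the subject and object noun phrases (actual formulations differ considerably in detail):

[ transitive verb
  SUBCAT ⟨ NP[1], NP[2] ⟩ ]
↦
[ passive verb form
  SUBCAT ⟨ NP[2] ⟩ ]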

Lexicon rules are formulated, depending on the theoretical approach, either as meta-rules describing the lexicon or as constraints on words within the HPSG formalism itself.

Semantics and pragmatics

The meaning of a sign is specified within its synsem object in a matrix usually called CONTENT, which typically has one of several subtypes of content, each with its own features. Pollard and Sag 1994, for example, provide the types psoa, nom-obj and quant, while Bender, Sag and Wasow 2003 assume a uniform feature geometry for all CONTENT (in their terminology, SEM) values.

To represent the situation semantics, most HPSG theories resort to so-called parameterized states of affairs (psoas for short), which are represented by matrices of type psoa. Parameterized states of affairs consist of a relation such as see, hit, book, person, together with parameters that fill the various semantic roles of the relation. In HPSG, parameterized states of affairs are likewise represented in attribute-value matrices. The relation "the man sees the dog" can, following Müller 2007, be represented by the following matrices:
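A sketch of these matrices (role labels and the internal structure of the RESTRICTIONS elements are simplified here):

[ psoa: see
  AGENT    [1]
  PATIENT  [2] ]

[ INDEX         [1]
  RESTRICTIONS  { man([1]) } ]

[ INDEX         [2]
  RESTRICTIONS  { dog([2]) } ]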

The first matrix is the psoa representation of the relation see with its two arguments. The second matrix represents the CONTENT value of the description of man: its index is identified by the tag [1] with the agent of see, and the RESTRICTIONS set specifies that [1] is a man. The same applies to the third matrix, which occurs in the description of dog.

Semantic roles can also be linked to syntactic functions by structure sharing. The following extract from the LOCAL value of "see" identifies the indices of the two noun phrases on the SUBCAT list (see the section Complements) with the agent and the patient (feature geometry following Müller 2007):
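A sketch of such an extract, where NP[1] abbreviates the synsem object of a noun phrase whose index carries the tag [1]:

[ CAT | SUBCAT  ⟨ NP[1], NP[2] ⟩
  CONT [ see
         AGENT    [1]
         PATIENT  [2] ] ]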

In more recent HPSG-based theories, other semantic frameworks such as Minimal Recursion Semantics (MRS) and Lexical Resource Semantics (LRS) are also used; these, too, can be represented with attribute-value matrices.

Context information is stored in a matrix of type context under the path SYNSEM | LOC | CONTEXT, which has features such as BACKGROUND and C-INDICES. BACKGROUND is a set of psoa objects that provide background information on the utterance of the sentence. C-INDICES has several attributes that supply information, in the form of indices, about the circumstances of the utterance, for example the speaker, the addressee and the place.
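A sketch of such a matrix (the attribute names under C-INDICES are illustrative):

[ context
  BACKGROUND  { psoa, … }
  C-INDICES [ SPEAKER    index
              ADDRESSEE  index
              LOCATION   index ] ]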

Structures with heads

In HPSG it is mostly assumed that phrasal signs are composed of a head daughter and a certain number of non-head daughters. The composition of such a structure is primarily determined by grammar principles and by the features of the daughters. Early work in particular tried to get by with the smallest possible number of very general types of constructions, so-called ID schemata (immediate dominance schemata). The grammar of Pollard and Sag 1994 contains six ID schemata, including one for combining head and complement and one for combining head and adjunct. More recent versions, especially those close to construction grammar, often contain numerous and quite specific constructions; Ginzburg and Sag 2000 develop 23 ID schemata. It is also controversial whether certain phenomena should be described with ID schemata that have no non-head daughter but only a head daughter. Such proposals have been made, for example, for introducing non-local information without the use of traces (see under Non-local information). The following sections concentrate on approaches in the spirit of Pollard and Sag 1994, which assume a small number of ID schemata and strong lexicalization.

Composition of the semantics

In HPSG versions following Pollard and Sag 1994, it is assumed that the semantics of a phrase is in most cases identical to that of the head daughter; in a structure with an adjunct, however, the semantics is determined by the adjunct daughter. For the head-adjunct structure "red book" and its daughters "book" (head) and "red" (adjunct), the SYNSEM values sketched below result.

The feature HEAD contains information shared by a head and its phrasal projections, for example case, number and gender in noun phrases. The head feature principle requires the HEAD value of a phrase to be identical to that of its head daughter.

Types of non-head daughters

Depending on the approach, different types of non-head daughters are distinguished. This section can therefore only give examples.

Complements

Complement daughters are lexically determined by their head within the framework of valence. Information about the valence of a sign is stored in one or more lists, such as the SUBCAT attribute, under the path SYNSEM | LOC. These lists mostly contain the SYNSEM objects of those arguments of the sign that have not yet been saturated. Depending on the theoretical approach, excerpts like the following (sketched after this list) could be formulated for a verb such as "see":

  • With a single SUBCAT feature

  • With separate lists for subject and complements
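Sketches of the two variants, where NP[nom] and NP[acc] abbreviate the synsem objects of noun phrases in the nominative and accusative (the feature names SUBJ and COMPS follow later versions of the theory):

[ SUBCAT ⟨ NP[nom], NP[acc] ⟩ ]

[ SUBJ   ⟨ NP[nom] ⟩
  COMPS  ⟨ NP[acc] ⟩ ]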

A grammar principle determines how complements are saturated. Assuming a single SUBCAT list, it can be formulated as follows:

In a head-complement phrase, the SUBCAT value of the head daughter is the concatenation of the SUBCAT list of the phrase with the list of the SYNSEM values of the complement daughters.
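Schematically, with ⊕ standing for list concatenation and the tags marking structure sharing:

[ phrase
  SYNSEM | LOC | CAT | SUBCAT  [1]
  DTRS [ HEAD-DTR | SYNSEM | LOC | CAT | SUBCAT  [1] ⊕ [2]
         COMP-DTRS  ⟨ signs whose SYNSEM values form the list [2] ⟩ ] ]
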
Adjuncts

According to Pollard and Sag 1994, adjuncts select their heads: they carry a head feature MODIFIED (MOD for short) that is identified by structure sharing with the SYNSEM value of the head, as required by the ID schema defining head-adjunct structures. In addition, when situation semantics is used (see Semantics and pragmatics), the semantics of the adjunct daughter is assumed to be identical to that of the mother, and this constraint is part of the semantics principle. This makes it possible to capture the semantics of phrases with so-called encapsulating modification. A number of more recent works instead assume that adjuncts, like complements, are determined by a valence list of the head; this approach is known as the adjuncts-as-complements approach.

Other

In order to analyze various phenomena, numerous further types of non-head daughters have been introduced. Pollard and Sag 1994, for example, proposed a word class of their own, the so-called markers, for words such as the English conjunction that. According to their head-marker schema, markers select the SYNSEM value of the head via the SPEC feature; in addition, the MARKING feature of the marker and of its mother has the value marked.

A formal peculiarity is the analysis of quantifiers introduced by Pollard and Sag 1994. Their scope properties are modeled with a Cooper store, which contains the semantics of the quantifier and is passed upward from the quantifier until it is bound off by a constraint.

Non-local information

For dependencies between nodes that are far apart in the feature description of a larger phrase, HPSG uses so-called non-local features, which contain the information needed for the unbounded dependency and are passed on from node to node in order to make it available at both of the nodes involved. A grammar principle ensures that non-local values are passed on until they are bound off for a specific reason.

This enables, for example, the analysis of the extraction of noun phrases. As in other grammatical theories, many HPSG approaches assume that the extracted noun phrase leaves a trace at its original position; the trace is coindexed with the extracted phrase and differs from other words in that its PHON value is empty. Examples from English:

  • John₁, Mary loves _₁
Without extraction: Mary loves John
  • I wonder who₁ Sandy loves _₁
Without extraction: Sandy loves ...

Pollard and Sag 1994 and others have proposed a feature description for traces in which the LOCAL value, containing all local syntactic and semantic information, is identified with an element of the non-local SLASH list, which is then passed on by the grammar principle just mentioned until it is bound off again. Other analyses use phrases with only one daughter, or lexicon rules, to introduce the non-local information at a node without the need for an empty sign.
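A sketch of such a description of a trace, following Pollard and Sag 1994 (feature names slightly abbreviated); the empty PHON value and the sharing of the LOCAL value with the SLASH element are its defining properties:

[ word
  PHON  ⟨ ⟩
  SYNSEM [ LOCAL           [1]
           NONLOC | SLASH  ⟨ [1] ⟩ ] ]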

Binding theory

Binding theory makes statements about whether noun phrases can be realized as reflexive pronouns, as personal pronouns or as non-pronouns. The binding theories proposed within HPSG assume that two conditions must be fulfilled for a noun phrase to appear as an anaphor, i.e. as a reflexive in the sense of classical grammar:

  • the noun phrase must stand in a certain relationship to another noun phrase
  • the two noun phrases must have the same INDEX value

The various approaches define this relationship differently, but the decisive factor in HPSG is always that the anaphoric noun phrase, its mother or one of its projections is more oblique than the noun phrase with which it is coindexed.

Word order

The word order of a language can be expressed in HPSG by further constraints, the so-called linear precedence rules (LP rules for short), which have the form

X < Y
or
X > Y

to have. X <Y means that the constituent X comes before the constituent Y, X> Y stands for the opposite position. In languages ​​where heads are at the end of a phrase, the rule applies

Head daughter <non head daughter

Languages with more complex or freer word order, like German, require more complicated rules, for example a disjunction of several LP rules.

For languages with particularly free word order, there are several proposals that go beyond the formulation of complex LP rules. Free ordering of complements has been explained by some researchers with lexicon rules that reorder the valence lists, or by assuming unordered valence sets instead of lists; flat structure trees have also been considered. A number of papers suggest, instead, not removing elements from the valence lists once they have been saturated, but passing them further up so that signs at another position in the structure can still access them.

Another approach assumes that word order is not directly determined by the syntactic structure, i.e. that constituents need not be continuous. For this purpose, the daughters of a phrasal sign are collected in a so-called linearization domain in the DOM feature, to which LP rules are then applied. When two constituents are combined, their domains can be joined into the mother's domain by the shuffle operation, which preserves the relative order of the signs within each domain; a domain can also be declared compact, so that no foreign signs may intervene between its signs.
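For illustration: shuffling the domains ⟨a, b⟩ and ⟨c⟩ yields every interleaving in which a still precedes b:

shuffle(⟨a, b⟩, ⟨c⟩) = { ⟨c, a, b⟩, ⟨a, c, b⟩, ⟨a, b, c⟩ }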

Implementation and application in computational linguistics

Since the beginning of the 1990s, various systems for implementing HPSG grammars have been developed in computational linguistics. Only a small number of HPSG-based architectures implement the formalism directly by applying the constraints formulated in the theory to every linguistic object. Since such systems can run into efficiency problems, other implementations encode some of the constraints as relations with linguistic objects as arguments; phrase structure rules play an important role here, which, although not part of HPSG proper, facilitate efficient parsing. Some implementations also allow parsing with discontinuous constituents, which play a role in certain HPSG grammars (see Word order). HPSG-based systems are used in research on deep processing, where they can also be combined with methods of shallow processing.


References

  1. Ivan A. Sag: Sign-Based Construction Grammar. An informal synopsis. (PDF; 818 kB).
  2. Jesse Tseng: The representation of syllable structure in HPSG. In: Stefan Müller (Ed.): Proceedings of the HPSG08 Conference. Pages 234–252. CSLI Publications, Stanford 2008
  3. Müller 2007, page 68
  4. A more detailed description of CONTEXT can be found in: Georgia M. Green: The Structure of CONTEXT: The Representation of Pragmatic Restrictions in HPSG. In: James Yoon (Ed.): Studies in the Linguistic Sciences. Proceedings of the 5th annual meeting of the Formal Linguistics Society of the Midwest. 1996
  5. Another example of a strongly phrasal-based approach is Petter Haugereid: Decomposed Phrasal Constructions. In: Stefan Müller (Ed.): Proceedings of the HPSG07 Conference. CSLI Publications, Stanford 2007
  6. Formulation based on Pollard and Sag 1994
  7. Examples: Müller 1999, page 229 with unary ID schemata; Gosse Bouma, Robert Malouf, Ivan Sag: Satisfying Constraints on Extraction and Adjunction (PDF; 292 kB). In: Natural Language and Linguistic Theory, 19, 1, pages 1–65.
  8. Adam Przepiórkowski: Case Assignment and the Complement-Adjunct Dichotomy: A Non-Configurational Constraint-Based Approach. Ph.D. thesis, University of Tübingen, Tübingen 1999; Tibor Kiss: Semantic Constraints on Relative Clause Extraposition. In: Natural Language and Linguistic Theory 23 (2005), pages 281–334; Emily M. Bender: Radical Non-Configurationality without Shuffle Operators: An Analysis of Wambaya. In: Stefan Müller (Ed.): Proceedings of the HPSG08 Conference, pages 7–24. CSLI Publications, Stanford 2008
  9. Mike Reape: Domain Union and Word Order Variation in German. In: John Nerbonne, Klaus Netter, Carl Pollard: German in Head-Driven Phrase Structure Grammar. CSLI Lecture Notes 146. Pages 151–198. CSLI Publications, Stanford 1994