
The AMR formalism is described in "Abstract Meaning Representation for Sembanking" (L. Banarescu, C. Bonial, S. Cai, M. Georgescu, K. Griffitt, U. Hermjakob, K. Knight, P. Koehn, M. Palmer, N. Schneider), Proc. Linguistic Annotation Workshop, 2013.

As a simple example, we represent "The boy wants to go" as:
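In PENMAN notation (the bracketed format standardly used to write AMRs), the graph for this sentence, following the canonical example from Banarescu et al. (2013), is:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```

Here w, b, and g are variables, want-01 and go-01 are PropBank frames, and the bare occurrence of b under go-01 marks a re-entrancy: the boy is both the wanter and the goer.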

Note that the variable b appears twice, once as the :ARG0 of want-01 and once as the :ARG0 of go-01. This same AMR can also be viewed as a feature structure graph:

We have also developed a packed representation for storing an exponential number of AMRs.


Title: MASSIVE Multilingual Abstract Meaning Representation: A Dataset and Baselines for Hallucination Detection

Abstract: Abstract Meaning Representation (AMR) is a semantic formalism that captures the core meaning of an utterance. There has been substantial work developing AMR corpora in English and, more recently, across languages, though the limited size of existing datasets and the cost of collecting more annotations are prohibitive. With both engineering and scientific questions in mind, we introduce MASSIVE-AMR, a dataset with more than 84,000 text-to-graph annotations, currently the largest and most diverse of its kind: AMR graphs for 1,685 information-seeking utterances mapped to 50+ typologically diverse languages. We describe how we built our resource and its unique features before reporting on experiments using large language models for multilingual AMR and SPARQL parsing, as well as applying AMRs for hallucination detection in the context of knowledge base question answering. The results shed light on persistent issues in using LLMs for structured parsing.
Subjects: Computation and Language (cs.CL)


Abstract meaning representation for legal documents: an empirical research on a human-annotated dataset

  • Original Research
  • Published: 07 July 2021
  • Volume 30, pages 221–243 (2022)


  • Sinh Trong Vu 1 ,
  • Minh Le Nguyen 1 &
  • Ken Satoh 2  


Natural language processing techniques have recently contributed more and more to the analysis of legal documents, supporting the implementation of laws and rules by computer. Previous approaches to representing a legal sentence were often based on logical patterns that capture the relations between concepts in the sentence, where each concept may span multiple words. Such representations lack semantic information at the word level. In this work, we aim to overcome these shortcomings by representing legal texts in the form of abstract meaning representation (AMR), a graph-based semantic representation that has recently gained considerable popularity in the NLP community. We present our study of AMR parsing (producing AMR from natural language) and AMR-to-text generation (producing natural language from AMR) specifically for the legal domain. We also introduce JCivilCode, a human-annotated legal AMR dataset created and verified by a group of linguistic and legal experts. We conduct an empirical evaluation of various approaches to parsing and generating AMR on our own dataset and show the current challenges. Based on our observations, we propose a domain adaptation method applied in the training and decoding phases of a neural AMR-to-text generation model. Our method improves the quality of text generated from AMR graphs compared to the baseline model. (This work extends our two previous papers: "An Empirical Evaluation of AMR Parsing for Legal Documents", published in the Twelfth International Workshop on Juris-informatics (JURISIN) 2018; and "Legal Text Generation from Abstract Meaning Representation", published in the 32nd International Conference on Legal Knowledge and Information Systems (JURIX) 2019.)



We keep the original trained models without retraining them on the new dataset LDC2017T10.

Abend O, Rappoport A (2013) Universal conceptual cognitive annotation (UCCA). In: Proceedings of the 51st annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 228–238. Association for Computational Linguistics, Sofia, Bulgaria. https://www.aclweb.org/anthology/P13-1023

Ballesteros M, Al-Onaizan Y (2017) AMR parsing using stack-LSTMs. In: Proceedings of the 2017 conference on empirical methods in natural language processing, pp. 1269–1275. Association for Computational Linguistics, Copenhagen, Denmark. https://doi.org/10.18653/v1/D17-1130

Banarescu L, Bonial C, Cai S, Georgescu M, Griffitt K, Hermjakob U, Knight K, Koehn P, Palmer M, Schneider N (2013) Abstract meaning representation for sembanking. In: Proceedings of the 7th linguistic annotation workshop and interoperability with discourse, pp. 178–186. Association for Computational Linguistics, Sofia, Bulgaria

Basile V, Bos J, Evang K, Venhuizen N (2012) Developing a large semantically annotated corpus. In: Proceedings of the eighth international conference on language resources and evaluation (LREC’12), pp. 3196–3200. European Language Resources Association (ELRA), Istanbul, Turkey. http://www.lrec-conf.org/proceedings/lrec2012/pdf/534_Paper.pdf

Brandt L, Grimm D, Zhou M, Versley Y (2016) ICL-HD at SemEval-2016 task 8: meaning representation parsing-augmenting AMR parsing with a preposition semantic role labeling neural network. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pp. 1160–1166. Association for Computational Linguistics, San Diego, California. https://doi.org/10.18653/v1/S16-1179

Cai S, Knight K (2013) Smatch: an evaluation metric for semantic feature structures. In: Proceedings of the 51st annual meeting of the association for computational linguistics (Volume 2: Short Papers), pp. 748–752. Association for Computational Linguistics, Sofia, Bulgaria

Cao K, Clark S (2019) Factorising AMR generation through syntax. In: Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, Volume 1 (Long and Short Papers), pp. 2157–2163. Association for Computational Linguistics, Minneapolis, Minnesota. https://doi.org/10.18653/v1/N19-1223

Damonte M, Cohen SB (2019) Structural neural encoders for AMR-to-text generation. In: Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, Volume 1 (Long and Short Papers), pp. 3649–3658. Association for Computational Linguistics, Minneapolis, Minnesota. https://doi.org/10.18653/v1/N19-1366

Damonte M, Cohen SB, Satta G (2017) An incremental parser for abstract meaning representation. In: Proceedings of European chapter of the ACL (EACL)

Denkowski M, Lavie A (2014) Meteor universal: Language specific translation evaluation for any target language. In: Proceedings of the EACL 2014 workshop on statistical machine translation

Dohare S, Karnick H, Gupta V (2017) Text summarization using abstract meaning representation. arXiv preprint arXiv:1706.01678

Dong L, Lapata M (2016) Language to logical form with neural attention. In: Proceedings of the 54th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 33–43. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/P16-1004 . https://www.aclweb.org/anthology/P16-1004

Fan A, Grangier D, Auli M (2018) Controllable abstractive summarization. In: Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 45–54. Association for Computational Linguistics, Melbourne, Australia. https://doi.org/10.18653/v1/W18-2706 . https://www.aclweb.org/anthology/W18-2706

Flanigan J, Thomson S, Carbonell J, Dyer C, Smith NA (2014) A discriminative graph-based parser for the abstract meaning representation. In: Proceedings of the 52nd annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 1426–1436. Association for Computational Linguistics, Baltimore, Maryland. https://doi.org/10.3115/v1/P14-1134

Foland W, Martin JH (2017) Abstract meaning representation parsing using LSTM recurrent neural networks. In: Proceedings of the 55th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 463–472. Association for Computational Linguistics, Vancouver, Canada. https://doi.org/10.18653/v1/P17-1043

Ge D, Li J, Zhu M, Li S (2019) Modeling source syntax and semantics for neural AMR parsing. In: Proceedings of the twenty-eighth international joint conference on artificial intelligence, IJCAI-19, pp. 4975–4981. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2019/691

Ghazvininejad M, Shi X, Priyadarshi J, Knight K (2017) Hafez: an interactive poetry generation system. In: Proceedings of ACL 2017, system demonstrations, pp. 43–48. Association for computational linguistics, Vancouver, Canada. https://www.aclweb.org/anthology/P17-4008

Goodman J, Vlachos A, Naradowsky J (2016) Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing. In: Proceedings of the 54th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 1–11. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/P16-1001

Gu J, Lu Z, Li H, Li VO (2016) Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 1631–1640. Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1154 . http://aclweb.org/anthology/P16-1154

Hardy H, Vlachos A (2018) Guided neural language generation for abstractive summarization using abstract meaning representation. In: Proceedings of the 2018 conference on empirical methods in natural language processing, pp. 768–773. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-1086

Jones B, Andreas J, Bauer D, Hermann KM, Knight K (2012) Semantics-based machine translation with hyperedge replacement grammars. In: Proceedings of COLING 2012, pp. 1359–1376. The COLING 2012 Organizing Committee, Mumbai, India

Katayama T (2007) Legal engineering-an engineering approach to laws in e-society age. In: Proceedings of the 1st international workshop on JURISIN

Konstas I, Iyer S, Yatskar M, Choi Y, Zettlemoyer L (2017) Neural AMR: Sequence-to-sequence models for parsing and generation. In: Proceedings of the 55th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 146–157. Association for Computational Linguistics, Vancouver, Canada. https://doi.org/10.18653/v1/P17-1014

Liao K, Lebanoff L, Liu F (2018) Abstract meaning representation for multi-document summarization. In: Proceedings of the 27th international conference on computational linguistics, pp. 1178–1190. Association for Computational Linguistics, Santa Fe, New Mexico, USA

Lin Z, Xue N (2019) Parsing meaning representations: is easier always better? In: Proceedings of the first international workshop on designing meaning representations, pp. 34–43. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/W19-3304

Liu F, Flanigan J, Thomson S, Sadeh N, Smith NA (2015) Toward abstractive summarization using semantic representations. In: Proceedings of the 2015 conference of the North American chapter of the association for computational linguistics: human language technologies, pp. 1077–1086. Association for Computational Linguistics, Denver, Colorado. https://doi.org/10.3115/v1/N15-1114

Liu Y, Che W, Zheng B, Qin B, Liu T (2018) An AMR aligner tuned by transition-based parser. In: Proceedings of the 2018 conference on empirical methods in natural language processing, pp. 2422–2430. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-1264

Lyu C, Titov I (2018) AMR parsing as graph prediction with latent alignment. In: Proceedings of the 56th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 397–407. Association for Computational Linguistics, Melbourne, Australia. https://doi.org/10.18653/v1/P18-1037

Mahalunkar A, Kelleher J (2019) Multi-element long distance dependencies: Using SPk languages to explore the characteristics of long-distance dependencies. In: Proceedings of the workshop on deep learning and formal languages: building bridges, pp. 34–43. Association for Computational Linguistics, Florence. https://doi.org/10.18653/v1/W19-3904 . https://www.aclweb.org/anthology/W19-3904

Mitra A, Baral C (2016) Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. In: Thirtieth AAAI conference on artificial intelligence

Nakamura M, Nobuoka S, Shimazu A (2007) Towards translation of legal sentences into logical forms. In: Annual conference of the Japanese society for artificial intelligence, pp. 349–362. Springer

Napoles C, Gormley M, Van Durme B (2012) Annotated Gigaword. In: Proceedings of the joint workshop on automatic knowledge base construction and web-scale knowledge extraction (AKBC-WEKEX), pp. 95–100. Association for Computational Linguistics, Montréal, Canada. https://www.aclweb.org/anthology/W12-3018

Naseem T, Shah A, Wan H, Florian R, Roukos S, Ballesteros M (2019) Rewarding Smatch: Transition-based AMR parsing with reinforcement learning. In: Proceedings of the 57th annual meeting of the association for computational linguistics, pp. 4586–4592. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1451

Navas-Loro M, Satoh K, Rodríguez-Doncel V (2019) Contractframes: bridging the gap between natural language and logics in contract law. In: Kojima K, Sakamoto M, Mineshima K, Satoh K (eds) New frontiers in artificial intelligence. Springer International Publishing, Cham, pp 101–114


van Noord R, Bos J (2017) Neural semantic parsing by character-based translation: experiments with abstract meaning representations. Comput Linguist Netherlands J 7:93–108


Papineni K, Roukos S, Ward T, Zhu WJ (2002) Bleu: a method for automatic evaluation of machine translation. In: Proceedings of 40th annual meeting of the association for computational linguistics, pp. 311–318. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA. https://doi.org/10.3115/1073083.1073135 . https://www.aclweb.org/anthology/P02-1040

Peng X, Song L, Gildea D (2015) A synchronous hyperedge replacement grammar based approach for AMR parsing. In: Proceedings of the nineteenth conference on computational natural language learning, pp. 32–41. Association for Computational Linguistics, Beijing, China. https://doi.org/10.18653/v1/K15-1004

Peng X, Wang C, Gildea D, Xue N (2017) Addressing the data sparsity issue in neural AMR parsing. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics: Volume 1, Long Papers, pp. 366–375. Association for Computational Linguistics, Valencia, Spain

Pennington J, Socher R, Manning C (2014) Glove: Global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics. https://doi.org/10.3115/v1/D14-1162 . http://aclweb.org/anthology/D14-1162

Pourdamghani N, Knight K, Hermjakob U (2016) Generating English from abstract meaning representations. In: Proceedings of the 9th international natural language generation conference, pp. 21–25. Association for Computational Linguistics, Edinburgh, UK. https://doi.org/10.18653/v1/W16-6603

Rao S, Marcu D, Knight K, Daumé III H (2017) Biomedical event extraction using abstract meaning representation. In: BioNLP 2017, pp. 126–135. Association for Computational Linguistics, Vancouver, Canada. https://doi.org/10.18653/v1/W17-2315 . https://www.aclweb.org/anthology/W17-2315

Ribeiro LFR, Gardent C, Gurevych I (2019) Enhancing AMR-to-text generation with dual graph representations. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pp. 3174–3185. Association for Computational Linguistics, Hong Kong, China. https://doi.org/10.18653/v1/D19-1314 . https://www.aclweb.org/anthology/D19-1314

Sachan M, Xing E (2016) Machine comprehension using rich semantic representations. In: Proceedings of the 54th annual meeting of the association for computational linguistics (Volume 2: Short Papers), pp. 486–492. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/P16-2079

Shaw P, Massey P, Chen A, Piccinno F, Altun Y (2019) Generating logical forms from graph representations of text and entities. In: Proceedings of the 57th annual meeting of the association for computational linguistics, pp. 95–106. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1010 . https://www.aclweb.org/anthology/P19-1010

Shaw S, Pajak M, Lisowska A, Tsaftaris SA, O’Neil AQ (2020) Teacher-student chain for efficient semi-supervised histology image classification. arXiv preprint arXiv:2003.08797

Song L, Gildea D, Zhang Y, Wang Z, Su J (2019) Semantic neural machine translation using AMR. Trans Assoc Comput Linguist 7:19–31. https://doi.org/10.1162/tacl_a_00252


Song L, Zhang Y, Wang Z, Gildea D (2018) A graph-to-sequence model for AMR-to-text generation. In: Proceedings of the 56th annual meeting of the association for computational linguistics (Volume 1: Long Papers), pp. 1616–1626. Association for Computational Linguistics, Melbourne, Australia. https://doi.org/10.18653/v1/P18-1150

Wang C, Pradhan S, Pan X, Ji H, Xue N (2016) CAMR at SemEval-2016 task 8: An extended transition-based AMR parser. In: Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pp. 1173–1178. Association for Computational Linguistics, San Diego, California. https://doi.org/10.18653/v1/S16-1181

Wang C, Xue N, Pradhan S (2015) A transition-based algorithm for AMR parsing. In: Proceedings of the 2015 conference of the North American chapter of the association for computational linguistics: human language technologies, pp. 366–375. Association for Computational Linguistics, Denver, Colorado. https://doi.org/10.3115/v1/N15-1040

Xie Q, Luong MT, Hovy E, Le QV (2020) Self-training with noisy student improves imagenet classification. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10687–10698

Zhang S, Ma X, Duh K, Van Durme B (2019) AMR parsing as sequence-to-graph transduction. In: Proceedings of the 57th annual meeting of the association for computational linguistics, pp. 80–94. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1009

Zhou J, Xu F, Uszkoreit H, Qu W, Li R, Gu Y (2016) AMR parsing with an incremental joint model. In: Proceedings of the 2016 conference on empirical methods in natural language processing, pp. 680–689. Association for Computational Linguistics, Austin, Texas. https://doi.org/10.18653/v1/D16-1065

Zhu J, Li J, Zhu M, Qian L, Zhang M, Zhou G (2019) Modeling graph structure in transformer for better AMR-to-text generation. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pp. 5462–5471. Association for Computational Linguistics, Hong Kong, China. https://doi.org/10.18653/v1/D19-1548 . https://www.aclweb.org/anthology/D19-1548


Acknowledgments

This work was supported by JST CREST Grant Number JPMJCR1513, Japan, and JSPS KAKENHI Grant Numbers 20H04295, 20K2046, and 20K20625. The research was also supported in part by the Asian Office of Aerospace R&D (AOARD), Air Force Office of Scientific Research (Grant no. FA2386-19-1-4041).

Author information

Authors and Affiliations

Japan Advanced Institute of Science and Technology, Ishikawa, 923-1292, Japan

Sinh Trong Vu & Minh Le Nguyen

National Institute of Informatics, Tokyo, 100-0003, Japan


Corresponding authors

Correspondence to Sinh Trong Vu or Minh Le Nguyen .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Vu, S.T., Le Nguyen, M. & Satoh, K. Abstract meaning representation for legal documents: an empirical research on a human-annotated dataset. Artif Intell Law 30, 221–243 (2022). https://doi.org/10.1007/s10506-021-09292-6


Accepted: 24 June 2021

Published: 07 July 2021

Issue Date: June 2022

DOI: https://doi.org/10.1007/s10506-021-09292-6


  • Abstract meaning representation
  • Deep neural network
  • Legal document

Abstract Meaning Representation parser

Description

Abstract Meaning Representation (AMR) is a semantic representation language. An AMR is a single-rooted, directed graph representing the meaning of a sentence.
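To make the graph structure concrete, here is a minimal sketch (not the AMREager code) of reading a PENMAN-serialized AMR into (source, role, target) triples; it handles only the basic notation and ignores quoted literals and alignment markup:

```python
import re

def parse_amr(s):
    """Parse a simple PENMAN-notation AMR string into a list of
    (source, role, target) triples, including re-entrant variables."""
    # Tokenize into parens, slashes, role labels (":ARG0"), and names.
    tokens = re.findall(r'\(|\)|/|:[^\s()]+|[^\s()/:]+', s)
    pos = 0

    def parse_node():
        nonlocal pos
        pos += 1                                  # consume '('
        var = tokens[pos]; pos += 1               # variable name
        pos += 1                                  # consume '/'
        triples = [(var, ':instance', tokens[pos])]; pos += 1
        while tokens[pos] != ')':
            role = tokens[pos]; pos += 1
            if tokens[pos] == '(':                # nested concept
                child, sub = parse_node()
                triples.append((var, role, child))
                triples.extend(sub)
            else:                                 # re-entrant variable or constant
                triples.append((var, role, tokens[pos])); pos += 1
        pos += 1                                  # consume ')'
        return var, triples

    return parse_node()[1]

triples = parse_amr("(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))")
# b is the target of two roles: the single-rooted graph has a re-entrancy.
```

Real AMR tooling (e.g. the `penman` Python package) handles the full notation; this sketch only illustrates the graph-as-triples view.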

This is a demo for AMREager, an AMR parser that works by processing a given sentence left-to-right, similarly to transition-based dependency parsers.

We thank our funders: Bloomberg and EU H2020 (grant agreement: 688139, SUMMA).

Graph visualization for this demo is done using AMRICA .

AMREager's source code can be found on GitHub. The multilingual parser can be found on GitHub as well.

We also have developed a set of evaluation metrics for AMR.

The parser is described in detail in the accompanying paper; the multilingual parser is described in detail in a separate paper.

Language: English, Chinese, German, Italian, or Spanish (the non-English parsers are still experimental).

Our parser also outputs alignments in a comment below the sentence. Alignments are interpreted as follows: a token span x-y refers to the tokens between positions x and y, exclusive of y, and a node address of the form 0.x.y… refers to the path from the root to its xth child, then that node's yth child, and so on until the end of the path. For example, an alignment of the second token to the addresses 0.0 and 0.1 means that token is aligned with both the first child and the second child of the root node. A full alignment appears as an additional line after the sentence line.
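The addressing scheme can be illustrated with a small sketch; the nested-tuple graph representation here is hypothetical, used only to show how an address such as 0.1 walks from the root to a child node:

```python
def resolve(node, address):
    """Follow a node address like '0.1' or '0.1.0' through a nested
    (concept, [children]) tuple; '0' alone denotes the root itself."""
    parts = address.split(".")
    assert parts[0] == "0"            # every address starts at the root
    for i in map(int, parts[1:]):     # each further index picks the i-th child
        node = node[1][i]
    return node[0]                    # the concept at the addressed node

# "The boy wants to go": root want-01 with children boy (0.0) and go-01 (0.1)
amr = ("want-01", [("boy", []), ("go-01", [("boy", [])])])
print(resolve(amr, "0.0"), resolve(amr, "0.1"), resolve(amr, "0.1.0"))
# boy go-01 boy
```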

Abstract Meaning Representation for Gesture

Richard Brutti, Lucia Donatelli, Kenneth Lai, James Pustejovsky

Richard Brutti, Lucia Donatelli, Kenneth Lai, and James Pustejovsky. 2022. Abstract Meaning Representation for Gesture. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1576–1583, Marseille, France. European Language Resources Association. https://aclanthology.org/2022.lrec-1.169

Abstract Meaning Representation (AMR) Annotation Release 2.0

Authors: Kevin Knight, Bianca Badarau, Laura Baranescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O'Gorman, Nathan Schneider
LDC Catalog No.: LDC2017T10
ISBN: 1-58563-802-1
ISLRN: 335-339-972-504-9
Release Date: June 15, 2017
Member Year: 2017
Data Type: Text
Data Sources: discussion forum, broadcast conversation, weblogs, web collection, newswire
Projects: BOLT, DEFT, GALE, ACE
Applications: coreference resolution, entity extraction, information extraction, semantic role labelling
Language: English
Language ID: eng

Knight, Kevin, et al. Abstract Meaning Representation (AMR) Annotation Release 2.0 LDC2017T10. Web Download. Philadelphia: Linguistic Data Consortium, 2017.







Introduction

Abstract Meaning Representation (AMR) Annotation Release 2.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group, and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of over 39,260 English natural language sentences from broadcast conversations, newswire, weblogs and web discussion forums.

AMR captures “who is doing what to whom” in a sentence. Each sentence is paired with a graph that represents its whole-sentence meaning in a tree structure. AMR utilizes PropBank frames, non-core semantic roles, within-sentence coreference, named entity annotation, modality, negation, questions, quantities, and so on to represent the semantic structure of a sentence largely independent of its syntax.
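As an illustration of one of these devices, negation is marked with a :polarity attribute; a small constructed example (not taken from the corpus) for "The boy did not go" would be:

```
(g / go-01
   :polarity -
   :ARG0 (b / boy))
```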

LDC also released Abstract Meaning Representation (AMR) Annotation Release 1.0 ( LDC2014T12 ).

The source data includes discussion forums collected for the DARPA BOLT and DEFT programs, transcripts and English translations of Mandarin Chinese broadcast news programming from China Central TV, Wall Street Journal text, translated Xinhua news texts, various newswire data from NIST OpenMT evaluations and weblog data used in the DARPA GALE program. The following table summarizes the number of training, dev, and test AMRs for each dataset in the release. Totals are also provided by partition and dataset:

Dataset Training Dev Test Totals
BOLT DF MT 1061 133 133 1327
Broadcast conversation 214 0 0 214
Weblog and WSJ 0 100 100 200
BOLT DF English 6455 210 229 6894
DEFT DF English 19558 0 0 19558
Guidelines AMRs 819 0 0 819
2009 Open MT 204 0 0 204
Proxy reports 6603 826 823 8252
Weblog 866 0 0 866
Xinhua MT 741 99 86 926
Totals 36521 1368 1371 39260

For those interested in utilizing a standard/community partition for AMR research (for instance in development of semantic parsers), data in the "split" directory contains 39,260 AMRs split roughly 93%/3.5%/3.5% into training/dev/test partitions, with most smaller datasets assigned to one of the splits as a whole. Note that splits observe document boundaries. The "unsplit" directory contains the same 39,260 AMRs with no train/dev/test partition.
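The figures in the table above are internally consistent; a few lines of Python (numbers copied directly from the table) confirm the totals and the stated split ratio:

```python
# (training, dev, test) AMR counts per dataset, from the table above.
counts = {
    "BOLT DF MT":             (1061, 133, 133),
    "Broadcast conversation": (214,  0,   0),
    "Weblog and WSJ":         (0,    100, 100),
    "BOLT DF English":        (6455, 210, 229),
    "DEFT DF English":        (19558, 0,  0),
    "Guidelines AMRs":        (819,  0,   0),
    "2009 Open MT":           (204,  0,   0),
    "Proxy reports":          (6603, 826, 823),
    "Weblog":                 (866,  0,   0),
    "Xinhua MT":              (741,  99,  86),
}
train, dev, test = (sum(col) for col in zip(*counts.values()))
assert (train, dev, test) == (36521, 1368, 1371)   # column totals match
assert train + dev + test == 39260                 # grand total matches
print(f"{100*train/39260:.1f}/{100*dev/39260:.1f}/{100*test/39260:.1f}")
# 93.0/3.5/3.5 — the "roughly 93%/3.5%/3.5%" split described above
```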

Please view this sample.


Acknowledgements

From University of Colorado

We gratefully acknowledge the support of the National Science Foundation Grant NSF: 0910992 IIS:RI: Large: Collaborative Research: Richer Representations for Machine Translation and the support of DARPA BOLT - HR0011-11-C-0145 and DEFT - FA-8750-13-2-0045 via a subcontract from LDC. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, DARPA or the US government.

From Information Sciences Institute (ISI)

Thanks to NSF (IIS-0908532) for funding the initial design of AMR, and to DARPA MRP (FA-8750-09-C-0179) for supporting a group to construct consensus annotations and the AMR Editor. The initial AMR bank was built under DARPA DEFT FA-8750-13-2-0045 (PI: Stephanie Strassel; co-PIs: Kevin Knight, Daniel Marcu, and Martha Palmer) and DARPA BOLT HR0011-12-C-0014 (PI: Kevin Knight).

From Linguistic Data Consortium (LDC)

This material is based on research sponsored by Air Force Research Laboratory and Defense Advance Research Projects Agency under agreement number FA8750-13-2-0045. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory and Defense Advanced Research Projects Agency or the U.S. Government.

We gratefully acknowledge the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0184 Subcontract 4400165821. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, or the US government.

From Language Weaver (SDL)

This work was partially sponsored by DARPA contract HR0011-11-C-0150 to LanguageWeaver Inc. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA or the US government.

Available Media

  • Web Download


PAMR: Persian Abstract Meaning Representation Corpus


Information

Published in ACM Transactions on Asian and Low-Resource Language Information Processing. Association for Computing Machinery, New York, NY, United States.

Author tags: Abstract Meaning Representation; low-resource language; natural language processing; Research-article.

bioRxiv

Pronouns reactivate conceptual representations in human hippocampal neurons

Corresponding author: D. E. Dijksterhuis ([email protected])

During discourse comprehension, every new word adds to an evolving representation of meaning that accumulates over consecutive sentences and constrains the next words. To minimize repetition and utterance length, languages use pronouns, like the word ‘she’, to refer to nouns and phrases that were previously introduced. It has been suggested that language comprehension requires that pronouns activate the same neuronal representations as the nouns themselves. Here, we test this hypothesis by recording from individual neurons in the human hippocampus during a reading task. We found that cells that are selective to a particular noun are later reactivated by pronouns that refer to the cells’ preferred noun. These results imply that concept cells contribute to a rapid and dynamic semantic memory network which is recruited during language comprehension. This study uniquely demonstrates, at the single-cell level, how memory and language are linked.

Competing Interest Statement

The authors have declared no competing interest.

One-Sentence Summary: Pronouns activate neurons in the human hippocampus if they refer to the concepts to which the cells are tuned.


Subject Area

  • Neuroscience

Prescribed mean curvature problems on homogeneous vector bundles

  • Correa, Eder M.

In this paper, we provide a detailed and systematic study of weak (singular) Hermite-Einstein structures on homogeneous holomorphic vector bundles over rational homogeneous varieties. We use standard tools from spectral geometry, Harmonic analysis, and Cartan's highest weight theory to provide a sufficient condition in terms of Fourier series and intersection numbers under which an $L^{2}$-function can be realized as mean curvature of a singular Hermitian structure on an irreducible homogeneous holomorphic vector bundle. We prove that the condition provided is necessary and sufficient for functions that belong to certain interpolation spaces. In the particular case of line bundles over irreducible Hermitian symmetric spaces of compact type, we describe explicitly in terms of representation theory the solutions of the underlying geometric PDE. Also, we establish a sufficient condition in terms of Fourier coefficients and intersection numbers for solvability and convergence of the weak Hermite-Einstein flow on irreducible homogeneous holomorphic vector bundles. As an application of our methods, we describe the first explicit examples in the literature of solutions to several geometric flows, including Donaldson's heat flow, Yang-Mills heat flow, and the gradient flow of Donaldson's Lagrangian, on line bundles over irreducible Hermitian symmetric spaces of compact type. Additionally, we show that every polynomial central charge function gives rise to a weak Donaldson's Lagrangian $\mathcal{M}$. In the particular case of irreducible homogeneous holomorphic vector bundles, we prove that the gradient flow of $\mathcal{M}$ converges to a Hermitian structure which is, up to a gauge transformation, a $Z$-critical Hermitian structure in the large volume limit.

  • Mathematics - Differential Geometry;
  • Mathematics - Algebraic Geometry;
  • Mathematics - Analysis of PDEs;
  • Mathematics - Representation Theory;
  • Mathematics - Spectral Theory


COMMENTS

  1. Abstract Meaning Representation

    Abstract Meaning Representation (AMR) is a semantic representation language. AMR graphs are rooted, labeled, directed, acyclic graphs , comprising whole sentences. They are intended to abstract away from syntactic representations, in the sense that sentences which are similar in meaning should be assigned the same AMR, even if they are not ...

  2. Abstract Meaning Representation (AMR)

    AMR Bank is a resource for natural language understanding, generation, and translation research. It provides pairs of sentences and abstract meaning representations (AMRs) for 59,255 sentences in English.

  3. PDF Abstract Meaning Representation

    Learn about AMR, a graph-based representation language for sentence meaning, introduced by Banarescu et al. (2013). See the format, content, and applications of AMR, as well as its limitations and challenges.

  4. amr-guidelines/amr.md at master · amrisi/amr-guidelines · GitHub

    AMR is a graph-based representation of natural language sentences that captures the semantic roles and relations of words. Learn how to use AMR to annotate, process, and generate text with this comprehensive guide.

  5. PDF Abstract Meaning Representation for Sembanking

    Abstract. We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like ...

  6. Language Abstract Meaning Representation (AMR)

    Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.

  7. [2405.19285] MASSIVE Multilingual Abstract Meaning Representation: A

    Abstract Meaning Representation (AMR) is a semantic formalism that captures the core meaning of an utterance. There has been substantial work developing AMR corpora in English and more recently across languages, though the limited size of existing datasets and the cost of collecting more annotations are prohibitive. With both engineering and scientific questions in mind, we introduce MASSIVE ...

  8. Expressive Power of Abstract Meaning Representations

    The syntax of abstract meaning representations (AMRs) can be defined recursively, and a systematic translation to first-order logic (FOL) can be specified, including a proper treatment of negation. AMRs without recurrent variables are in the decidable two-variable fragment of FOL. The current definition of AMRs has limited expressive power for ...

  9. Abstract Meaning Representations as Linked Data

    Abstract. The complex relationship between natural language and formal semantic representations can be investigated by the development of large, semantically-annotated corpora. The "Abstract Meaning Representation" (AMR) formulation describes the semantics of a whole sentence as a rooted, labeled graph, where nodes represent concepts ...

  10. PDF A Continuation Semantics for Abstract Meaning Representation

    Abstract Meaning Representation (AMR) is a general-purpose meaning representation that has become popular for its simple structure, ease of annotation and available corpora, and overall expressiveness (Banarescu et al., 2013; Knight et al., 2019). Specifically, AMR focuses on representing the predicative core of a sentence as an intuitive ...

  11. Assessing the Cross-linguistic Utility of Abstract Meaning Representation

    Abstract. Semantic representations capture the meaning of a text. Abstract Meaning Representation (AMR), a type of semantic representation, focuses on predicate-argument structure and abstracts away from surface form. Though AMR was developed initially for English, it has now been adapted to a multitude of languages in the form of non-English annotation schemas, cross-lingual text-to-AMR ...

  12. A Short Review of Abstract Meaning Representation Applications

    Abstract. Abstract Meaning Representation (AMR) is a representation model in which AMRs are rooted and labeled graphs that capture semantics on the sentence level while abstracting away from ...

  13. A Survey on Abstract Meaning Representation

    Abstract Meaning Representation (AMR) is a typical meaning representation framework that represents a sentence's meaning as a directed graph with concepts as labeled nodes and relations as directed edges. This survey serves as a systematic review of AMR. First, we compare different methods of producing AMR from linguistic ...

  14. An Incremental Parser for Abstract Meaning Representation

    Abstract Meaning Representation (AMR) is a semantic representation for natural language that embeds annotations related to traditional tasks such as named entity recognition, semantic role labeling, word sense disambiguation and co-reference resolution. We describe a transition-based parser for AMR that parses sentences left-to-right, in linear ...

  15. What are Abstract Meaning Representation graphs?

    Dec 5, 2021. If you work in NLP, you might come across a graphical semantic representation known as Abstract Meaning Representations. In this article I will present a high-level introduction to ...

  16. Abstract Meaning Representation

    Abstract Meaning Representation (AMR). First of all, each sentence is a rooted, directed, acyclic graph, where ... The intuition for the design is to eliminate the ambiguity caused by syntax or ...

  17. Abstract Meaning Representation (AMR) Annotation Release 3.0

    Introduction. Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of over 59,255 English natural ...

  18. Abstract meaning representation for legal documents: an empirical

    Abstract Meaning Representation is a semantic representation language that encodes the meaning of a sentence as a rooted, directed, edge-labeled, leaf-labeled graph while abstracting away the surface forms in a sentence. Every vertex and edge of the graph are labeled according to the sense of the words in a sentence.

  19. PDF Parsing and Generation for the Abstract Meaning Representation

    The representation we use is the Abstract Meaning Representation (AMR) (Banarescu et al., 2013), which has been designed with the idea of using it as an intermediate representation in machine translation. AMR represents the meaning of a sentence as labeled nodes in a graph (concepts), and labeled directed edges between them (relations). It ...

  20. Abstract Meaning Representation parser

    Abstract Meaning Representation (AMR) is a semantic representation language. An AMR is a single-rooted, directed graph representing the meaning of a sentence. This is a demo for AMREager, an AMR parser that works by processing a given sentence left-to-right, similarly to transition-based dependency parsers.

  21. Abstract Meaning Representation for Gesture

    Abstract This paper presents Gesture AMR, an extension to Abstract Meaning Representation (AMR), that captures the meaning of gesture. In developing Gesture AMR, we consider how gesture form and meaning relate; how gesture packages meaning both independently and in interaction with speech; and how the meaning of gesture is temporally and contextually determined.

  22. Abstract Meaning Representation (AMR) Annotation Release 2.0

    Introduction. Abstract Meaning Representation (AMR) Annotation Release 2.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of over 39,260 English natural ...

  23. Semantic Parsing using Abstract Meaning Representation

    The general abstract meaning representation is quite effective to capture the "essence" of the meaning of the question. There are a number of approaches for building an AMR parser to produce ...

  24. PAMR: Persian Abstract Meaning Representation Corpus

    One of the most used and well-known semantic representation models is Abstract Meaning Representation (AMR). This representation has had numerous applications in natural language processing tasks in recent years. Currently, for English and Chinese languages, large annotated corpora are available.

  25. Pronouns reactivate conceptual representations in human ...

    Abstract. During discourse comprehension, every new word adds to an evolving representation of meaning that accumulates over consecutive sentences and constrains the next words. To minimize repetition and utterance length, languages use pronouns, like the word 'she', to refer to nouns and phrases that were previously introduced.

  26. Prescribed mean curvature problems on homogeneous vector bundles

    We prove that the condition provided is necessary and sufficient for functions that belong to certain interpolation spaces. In the particular case of line bundles over irreducible Hermitian symmetric spaces of compact type, we describe explicitly in terms of representation theory the solutions of the underlying geometric PDE.
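Several of the entries above describe an AMR as a rooted, labeled, directed, acyclic graph in which a variable may be reentrant — as in "The boy wants to go", where the boy is the :ARG0 of both want-01 and go-02. The sketch below is a hand-rolled triple encoding in plain Python for illustration only (not the official AMR tooling or any particular parser's data structures), together with a check of the rooted-and-acyclic property:

```python
# AMR for "The boy wants to go":
#   (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))

# Instance triples: each variable is an instance of a concept.
instances = {"w": "want-01", "b": "boy", "g": "go-02"}

# Relation triples: (source variable, role, target variable).
# "b" appears as a target twice — :ARG0 of want-01 and :ARG0 of go-02 —
# which is what makes the structure a graph rather than a tree.
relations = [
    ("w", ":ARG0", "b"),
    ("w", ":ARG1", "g"),
    ("g", ":ARG0", "b"),
]

def is_rooted_dag(root, edges):
    """True iff every variable is reachable from root and there is no cycle."""
    children = {}
    for src, _, tgt in edges:
        children.setdefault(src, []).append(tgt)
    seen, stack = set(), set()
    def visit(node):
        if node in stack:      # revisiting a node on the current path: cycle
            return False
        if node in seen:       # already explored via another path: reentrancy
            return True
        seen.add(node)
        stack.add(node)
        ok = all(visit(c) for c in children.get(node, []))
        stack.remove(node)
        return ok
    return visit(root) and seen == set(instances)

print(is_rooted_dag("w", relations))  # True: rooted, acyclic, fully connected
```

Reversing an edge (say, making "b" point back to "w") would introduce a cycle and the check would fail, which is the property the definitions above rely on.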