Querying the Perseus Ancient Greek and Latin Treebank Data in ANNIS

We are pleased to announce the availability of the Perseus ANNIS Environment for searching the syntactical data in the Perseus Ancient Greek and Latin Treebanks.

ANNIS is an open source, versatile web browser-based search and visualization architecture for complex multilevel linguistic corpora with diverse types of annotation.

The Perseus Ancient Greek Dependency Treebank includes the entirety of Homer’s Iliad and Odyssey; Sophocles’ Ajax, Antigone, Electra, Oedipus Tyrannus and Trachiniae; Plato’s Euthyphro; and all of the works of Hesiod and Aeschylus – a total of 354,529 words. The Perseus Latin Dependency Treebank includes 53,143 words from eight authors: Caesar, Cicero, Jerome, Ovid, Petronius, Propertius, Sallust and Vergil.

Posted in Treebanks | 2 Comments

Marie-Claire Beaulieu Named Associate Editor of the Perseus Digital Library

We are pleased to announce that Marie-Claire Beaulieu, Assistant Professor of Classics, has been named Associate Editor of the Perseus Digital Library.

Hired in 2010, Professor Beaulieu has research interests in Greek religion and myth, epigraphy, and medieval Latin. Upon her arrival at Tufts, Professor Beaulieu immediately used digital technology to give undergraduates and MA students opportunities to make contributions and to conduct research, thus advancing intellectual culture where learning and research are intermeshed and reinforce each other. The thirteen undergraduate and graduate students—plus one community auditor—enrolled in her “Medieval Latin” class in Winter 2011 deciphered, translated, and annotated rare Latin manuscripts and early printed book leaves from “The Tisch Library Miscellany.” Their work was published online on the Tisch Library Miscellany page and has made a significant contribution to scholarship in the field by providing a valuable resource for professors, librarians, and scholars at Tufts and around the world, who can now access translations and annotations for their teaching and research.

Her work with the Tisch Library led her to receive both an internal Tufts award and an NEH Startup Grant to generalize the infrastructure needed to support—and manage over time—the increasingly varied and complex contributions of students at Tufts and elsewhere. These contributions include editorial work, translations, commentaries, and annotations. In addition, Prof. Beaulieu is working on ways to enhance the teaching of Greek mythology and religion with the use of digital technology.

Perseus staff and collaborators enthusiastically welcome her aboard!

Posted in Announcement, General | Comments Off on Marie-Claire Beaulieu Named Associate Editor of the Perseus Digital Library

Suggestions for new Greek, Latin texts? English translations?

[Please repost!]

We are preparing a new set of texts to be entered by the data entry firm with which we work (http://www.digitaldividedata.org/). The next order will be sent in mid-December, but a more substantial order will be placed early in 2013.

What would you like to see added to the Perseus Digital Library, both for use within the Perseus site and for download as TEI XML under a Creative Commons license? Note that we only enter materials that are in the public domain and that can be freely redistributed for re-use by others.

Some possibilities — but please suggest other things that you find important!

* Scholia of Greek and Latin authors

* Collections of fragmentary authors

* Sources from later antiquity (esp. Christian sources)

* More English translations

Please think about (1) individual authors and texts and (2) what you would want to see if we could do something big.

If you have individual suggestions, please write to gcrane2008@gmail.com. A public discussion via the Digital Classicist list would probably be the best venue.

Let us know what you want!

Posted in Suggestions | Comments Off on Suggestions for new Greek, Latin texts? English translations?

Annotation Service Beta

Announcing the beta availability of the Tufts Syntactic Annotation service. This service provides RESTful and Service Layer APIs for requesting (a) syntactic annotations from supported annotation repositories and (b) templates for the creation of syntactic annotations of passages and texts. The beta instance at http://sosol.perseus.tufts.edu/bsp/annotationservice is currently configured to provide access to the Alpheios repository of the Perseus Ancient Greek and Latin treebank annotations. These annotations can be identified and retrieved by CTS URN. The service can also retrieve text for which to create annotation templates from any CTS-API compliant repository, and can be configured to access other annotation and text repositories. The Syntactic Annotation service leverages the Salt-n-Pepper framework for converting between annotation formats. The current release supports the Perseus import format and the Perseus and PAULA export formats.
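
As a rough sketch of how a client might call the RESTful API, the helper below composes a request URL for retrieving an annotation by CTS URN. Only the base URL comes from this announcement; the `/annotation` path segment and the `urn` parameter name are illustrative assumptions, not documented parts of the API.

```python
import urllib.parse

# Base URL of the beta annotation service (from the announcement above).
BASE = "http://sosol.perseus.tufts.edu/bsp/annotationservice"

def annotation_url(urn):
    """Build a request URL for a syntactic annotation identified by CTS URN.

    NOTE: "/annotation" and the "urn" query parameter are hypothetical
    names used only to illustrate the shape of a RESTful request.
    """
    return BASE + "/annotation?" + urllib.parse.urlencode({"urn": urn})

# A client would then issue an ordinary HTTP GET against the result, e.g.
# with urllib.request.urlopen(annotation_url(...)).
print(annotation_url("urn:cts:greekLit:tlg0012.tlg001.perseus-grc1:1.1"))
```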

Currently available instances of the Bamboo Services Platform are unsecured, are operated with no explicit SLA, and should be considered stateless: that is, data may be wiped from persistent stores at any time.

Funding for the development of this service was provided by Tufts University, Project Bamboo and the Andrew W. Mellon Foundation.

Posted in Release, Technology | Tagged | Comments Off on Annotation Service Beta

Morphology Service Beta

Announcing the beta availability of service-based access to the morphological engines used by Perseus for Latin and Greek (Morpheus) and Arabic (Buckwalter). This service leverages a standard Morphology Service API and is made available on an instance of the Bamboo Services Platform at http://services.perseids.org/bsp/morphologyservice/analysis/word

Examples:

http://services.perseids.org/bsp/morphologyservice/analysis/word?lang=lat&engine=morpheuslat&word=mare

http://services.perseids.org/bsp/morphologyservice/analysis/word?lang=grc&engine=morpheusgrc&word=ἱστορίης

The Morphological Analysis Service responds to requests for morphological analysis of texts, submits them to the appropriate morphology engine for processing, and returns the results in XML adhering to a standard morphology schema. The service supports retrieval of texts for analysis from remote repositories as well as user-supplied chunks of text. URL-based and CTS repositories are supported. Where retrieval from a CTS-enabled repository is requested, CTS URNs are supported as document identifiers. Where retrieval from a URL-based repository is requested, URIs are supported as document identifiers.
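
The example URLs above follow a simple pattern, so a small helper can compose them from the three documented query parameters (`lang`, `engine`, `word`); non-ASCII words such as the Greek example are percent-encoded automatically. This is only a convenience sketch around the published URL format.

```python
import urllib.parse

# Endpoint of the beta morphology service (from the announcement above).
BASE = "http://services.perseids.org/bsp/morphologyservice/analysis/word"

def analysis_url(lang, engine, word):
    """Build a request URL using the query parameters shown in the examples.

    urlencode() also percent-encodes non-ASCII input, so Greek words
    like the second example work unchanged.
    """
    return BASE + "?" + urllib.parse.urlencode(
        {"lang": lang, "engine": engine, "word": word})

# Reproduces the first example URL in the post:
print(analysis_url("lat", "morpheuslat", "mare"))
```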

Currently available instances of the Bamboo Services Platform are insecure, are operated with no explicit SLA, and should be considered stateless: that is, data may be wiped from persistent stores at any time.

A secure instance of the BSP for which data will be preserved on future upgrades is anticipated in Fall 2012.

Funding for the development of this service was provided by Tufts University, Project Bamboo and the Andrew W. Mellon Foundation.


Posted in Release, Technology | Comments Off on Morphology Service Beta

Help the SDL!

Advance our understanding of the Greco-Roman World!
Contribute to the Scaife Digital Library — improve existing materials and create new ones!

If you want to understand the present and invent the future then FREE THE PAST!

Everyone can make a difference! Read the sources more closely yourself, learn something for your own enjoyment and at the same time enhance what is available to everyone else!

Translate that text! Only a fraction of the billions of surviving Greek and Latin words — no more than 5 or 10% — has been translated into modern languages such as English. The idea of Europe was invented in Greek and then in Latin and developed over thousands of years — but currently less than ⅓ of 1 percent of all US college graduates enroll in a Greek or Latin class. Help the other 99.7% understand much of what made Europe and the Americas so that they can help decide where they want to go!

What does that word really mean? We have been studying Greek and Latin but how well do we really understand either the languages or the cultures that they represent? Open content lexica and grammars for Classical Greek, Latin, and Arabic are available, along with commentaries with thousands of notes on Classical Greek and Latin texts. And wholly new instruments such as parallel corpora that align source texts with modern translations allow us to detect patterns of meaning in ways that were never before possible. You can help and learn more about Greek, Latin and thousands of years of cultural history as you do it.

Who/what/where is that? Wikipedia provides broad coverage for many ancient topics, but the reference works available in the SDL contain direct citations into the primary sources that general resources such as Wikipedia often leave out. The Smith Biographical and Geographical Dictionaries contain fundamental information about more than 30,000 people and places. All of these reference works contain written citations to the primary sources upon which they are based. Many of these citations have been converted to machine-actionable links, but the automated programs are never perfect. Check the citations that are there — how do they relate to the text as it stands? What citations were missed — you might be surprised at the range of sources that we do have. And are there important sources that should be cited?

Fix that text! Help us add more text — not only can you correct errors in OCR-generated text in front of you but you can create training data that can improve the performance of OCR on millions of words in a similar font or genre! And correcting OCR output allows you to learn how to type Classical Greek accurately and quickly!

Decode that manuscript! OCR software can do a lot with well-printed books, but it can’t do much with manuscripts or inscriptions, papyri or even early printed books. Adopt a text, read it carefully, share what you find and make that document visible in a digital world centuries or thousands of years after chisel met stone or stylus met parchment! Start with something simple — and then see if you can become a palaeographer and decode the shorthand of writers who lived in a world long vanished.

More carefully structured text! Digital texts don’t just use italics and bold — they can precisely describe their contents, making them easier for readers to understand and supporting more sophisticated forms of analysis. Join the Athenaeus Project to see what ancient audiences had read and to trace a network that connects tens of thousands of passages from almost a thousand years of Greek. Become a scholarly CSI force and help reconstruct sources that survive only because they are quoted, paraphrased or mentioned!

Where is that place? Is that Alexandria the one in Virginia, the city in Egypt — or one of many other cities that Alexander planted around the Middle East? Help us generate accurate maps of the places in our sources to help others understand what they are reading and to support new ways of understanding how ancient authors conceptualized their world!

Who is that person? Which Caesar is that, anyway? Is that the famous Cleopatra or one of her many namesakes? Help us distinguish the several Alexanders of Macedon from their descendant, the famous conqueror!

What does that word mean? Help reinvent our understanding of Greek and Latin! You can tell us which word sense in one of many dictionaries a particular passage intends us to understand — and you can learn a lot about Greek and Latin! Or you can help line up Greek and Latin words with their corresponding words in modern translations — in doing so you not only read more closely but also help build parallel texts, one of the most important tools in the modern arsenal to help research and to support the next reader!

What is going on in that sentence? Students of Greek and Latin since Cicero and Erasmus have quailed as their teachers asked them “what is that form? and on what does it depend?” You can record your answers for generations of readers to come – and you can contribute to Greek and Latin Treebanks — our closest equivalent to the Genomic databases of Biology!

Posted in Contribution, Scaife Viewer | Comments Off on Help the SDL!

Digital Humanities in the Classroom – Technical Approach to Platform Integration

Bridget Almas, The Perseus Project, Tufts University
Professor Marie-Claire Beaulieu, Tufts University
July 25, 2012

SoSOL and CITE are two separate frameworks, developed independently, for working with digital representations of ancient sources. They each approach the problem set from different directions, resulting in little overlap between what the two offer, and a great deal of potential for integration.

The SoSOL platform was designed to support the collaborative editing of the different types of XML data being integrated from multiple sources under the Papyri.info platform. Supported data types include transcriptions, translations, metadata, commentary and bibliographies, each adhering to the TEI/EpiDoc schema, but with different conventions and restrictions applied. Publications made up of one or more of these data types are guided through an editing lifecycle by a workflow engine built on top of a git repository. Support for a simple role-based user model is provided, leveraging the OpenID specification by delegating authentication to social identity providers. Editors can search a catalog of pre-established publication identifiers to select items to edit, or can create their own publications. Each user works on publications in their own clone of the underlying git source repository until they are ready to submit a revised publication for approval, at which point the submission is passed to an editorial board for review and can either be returned to the editor for further work and corrections, or finalized and updated in the master branch of the repository.

The CITE (Collections, Indexes, and Texts, with Extensions) architecture provides a framework both for digitizing textual sources and for creating mappings between those sources and their digital facsimiles at the level of the citation. It consists of technology-independent but machine-actionable URN schemes for canonical citation, APIs for network services that identify and retrieve objects identified by canonical URN, and implementations of those APIs on a variety of platforms. This architecture was developed by the Center for Hellenic Studies (CHS) in part to enable the work of the Homer Multitext Project (HMT). In developing the architecture, the CHS team intended to support a wide range of ancient source material in addition to manuscripts; with the CTS (Canonical Text Services) URN syntax we are able to express in a single identifier both the position of a work in a FRBR-like hierarchy and the position of a node or continuous range of nodes within that work. The CITE URN syntax applies the same theory to non-document objects and supports a citation scheme for images, enabling, in a single identifier, identification of both the image itself and specific coordinates on that image.
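
To make the URN structure concrete, the sketch below splits a CTS URN into its components. The general shape (`urn:cts:<namespace>:<textgroup.work[.version]>[:<passage>]`) follows the CTS scheme described above; the specific Iliad identifier used in the example is illustrative.

```python
def parse_cts_urn(urn):
    """Split a CTS URN into (namespace, work hierarchy, passage).

    Shape: urn:cts:<namespace>:<textgroup.work[.version]>[:<passage>]
    The work component encodes the FRBR-like hierarchy; the optional
    passage component identifies a node or range of nodes in the work.
    """
    parts = urn.split(":")
    assert parts[0] == "urn" and parts[1] == "cts", "not a CTS URN"
    namespace = parts[2]
    work = parts[3].split(".")                 # e.g. textgroup, work, version
    passage = parts[4] if len(parts) > 4 else None
    return namespace, work, passage

# e.g. Iliad Book 1, lines 1-5 (version identifier illustrative):
ns, work, passage = parse_cts_urn(
    "urn:cts:greekLit:tlg0012.tlg001.perseus-grc1:1.1-1.5")
```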

We have several separate but related needs driving our work on integrating these two platforms at Perseus. Most of our work focuses on the first two of these with a view to supporting the third and fourth goals in subsequent work.

  1. To support collaborative work by students, along the model of the HMT project, thus allowing students to conduct substantive linguistic research with a tangible outcome, the publication of a digital edition of their work.
  2. To work not only with inscriptions and papyri but with more general textual sources, such as the Greek, Latin, and Arabic collections in the Perseus Digital Library, for which subsets of the TEI Guidelines such as the TEI-Analytics subset (being developed by the Abbott Project) are more suitable.
  3. To support work on a growing range of historical sources in multiple formats and languages. These include more than 1,200 medieval manuscripts for which the Walter Art Gallery (250 MSS) and the Swiss e-codices project (900 MSS) have published high resolution scans under a Creative Commons license.
  4. To support a large and international community of digital editors, including students, advanced researchers and citizen scholars. The spring 2012 user base for the Perseus Digital Library exceeded 300,000 users, with c. 10% (30,000) working directly with Greek and Latin sources. The 90-9-1 rule predicts that 9% of an online community will contribute occasionally and 1% will make the majority of new contributions. This would imply active communities of 30,000 for Perseus as a whole and 3,000 for the Greek and Latin collections.
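
The arithmetic behind the estimates in point 4 is straightforward; as a quick check, with the figures taken from the text above:

```python
perseus_users = 300_000                        # spring 2012 user base
greek_latin_users = perseus_users * 10 // 100  # c. 10% use Greek/Latin sources

# 90-9-1 rule: 9% of a community contribute occasionally and 1% make the
# majority of new contributions, i.e. roughly 10% contribute at all.
perseus_active = perseus_users * (9 + 1) // 100
greek_latin_active = greek_latin_users * (9 + 1) // 100

print(perseus_active, greek_latin_active)  # 30000 3000
```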

Professor Beaulieu’s project to engage students in work on ancient funerary inscriptions provides an excellent opportunity to explore this work. The job of mapping her collection of images to transcriptions in order to produce digital editions leveraging those mappings parallels in many ways the work of the HMT project and is a good fit for the CITE services and APIs. In addition, the TEI-based EpiDoc XML standard to be used for digitizing the inscriptions is already well supported by the SoSOL platform. We are able to reuse large parts of the XML validation and display code from the papyri publication support in SoSOL while focusing on adding support for the CTS identifiers. This incremental approach allows us to lay the groundwork for the eventual integration of the full collection of Perseus texts while at the same time producing something more immediately applicable and available for use by a smaller, controlled community of students who can effectively serve as beta testers for the platform.

In keeping with agile development methodologies, we are taking an iterative approach to the integration. We started with the following code bases:

  1. a forked clone of the git repository of the SoSOL platform’s JRuby code base
  2. the Groovy/Java/Google App Engine reference implementation of CTS and CITE APIs from the HMT Project

The first deliverable was to create a prototype implementation that reused the existing SoSOL code for EpiDoc transcriptions almost in its entirety by sub-classing it and changing only the structure of the document identifiers to correspond more closely to the CTS URN syntax. We also substituted a CTS text inventory for the Papyri.info catalog. Coding the prototype gave us a means to explore the design of the SoSOL platform’s code and assess its viability for reuse. The concrete deliverable of a working user interface gave Professors Beaulieu and Crane a means to explore its viability from the perspective of the user (both student and reviewer).

The next step was to analyze whether we could also extend this work to support the larger Perseus corpus, which will be using the TEI-Analytics XML schema instead of Epidoc, and for which we will need to support collaborative editing not only at the level of the entire text but also at the level of a citation or passage. The latter leverages the CTS API heavily. However, as CTS is a read-only API, we needed to develop a set of parallel write/update/delete functionality that could be used to update and create new editions of CTS-compatible texts. To experiment with this, we augmented the XQuery based implementation of the CTS APIs from the Alpheios project, which was written by the developer working on this project. We also coded prototypes of additional extensions to the SoSOL code to work with texts and passages that use the TEI-A XML schema rather than Epidoc, and to present a passage selection interface.
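
To make the read/write split concrete, the sketch below pairs a standard read-only CTS request with a hypothetical set of write operations. The CTS specification defines only read requests such as GetPassage; none of the create/update/delete names here come from the specification — they merely illustrate the kind of parallel write layer described above.

```python
# Read side: CTS defines retrieval requests such as GetPassage (read-only).
# Write side: hypothetical operations added alongside the read-only API.
OPERATIONS = {
    "GetPassage":    "GET",     # standard CTS request
    "CreatePassage": "PUT",     # hypothetical: add a passage to an edition
    "UpdatePassage": "POST",    # hypothetical: replace a passage's text
    "DeletePassage": "DELETE",  # hypothetical: remove a passage
}

def request_line(op, urn):
    """Compose an HTTP request line for one of the operations above.

    The "/api" path and "request"/"urn" parameter names are placeholders.
    """
    return f"{OPERATIONS[op]} /api?request={op}&urn={urn} HTTP/1.1"
```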

Completing these two deliverables gave us confidence that the integration was in fact viable, and funding as an NEH startup project enables us to move the work beyond the prototype stage to actual implementation.

Through the work on the prototype, we were able to identify some key interoperability challenges for the two platforms.

For SoSOL this has centered around identifying and isolating the Papyri-specific assumptions of the platform. These have primarily been in the following areas:

  • identifier scheme
  • cataloging system
  • stylesheets for display
  • differing concepts of what makes up a “Publication”

For CTS the primary integration challenge so far has been in augmenting it with a compatible Create/Update/Delete system.

The challenges also include the need to identify or define a canonical citation scheme for the inscriptions, although this is not specifically a platform integration issue but instead a more general one related to the creation of digital editions.

The first deliverable of the implementation stage of the project was to integrate the prototype code with the master branch of the SoSOL repository, which had continued to evolve during our prototyping efforts and with which our forked clone was now out of sync. Through this process, we were able both to take advantage of various enhancements made to the SoSOL code in the interim and to reduce the number of changes to the main code base needed to support the new data and identifier types. This process also required some significant rewriting of the prototype code, but this was not surprising, as the creation of production-quality code was not the main objective of the prototype. We are now working on a branch of the master SoSOL repository, rather than a fork, and expect to be able to integrate the branched code back into the master branch fairly soon.

Once the above process was completed, the next deliverable was to deploy the SoSOL and CTS services on a Perseus server with a functioning interface that Professor Beaulieu and her assistants could use to select an inscription upon which to work and then enter the XML for the transcription, translation and commentary of that inscription. This deliverable has been fulfilled and they have been able to complete creation of a digital transcription and translation of the Nedymos epigram through the SoSOL interface.

Although initially we had also planned to include integration with the ImageJ tool in this iteration, the development in the meantime by the HMT of a superior web-based Image Citation tool for working with the images, along with the expanding adoption of the Open Annotation Core (OAC) Data Model specification for annotations, has led us to change course on that part of the design. We have begun the work of integrating the Image Citation tool into the SoSOL interface, and it can now be used from within this interface to select a region of interest on an image and create a CITE URN for that selection when editing or viewing the transcription. We are currently using a shared Google Drive spreadsheet to record these URNs, and the corresponding CTS URNs for the mapped text, in an index. The next step will be for the SoSOL tool to automatically record and store these mappings as annotations on the text in the form of OAC RDF triples.
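
A minimal sketch of what such an OAC annotation might look like, with the RDF triples represented as plain Python tuples: the `oa` namespace is the one used by the Open Annotation model, but every URN below is an illustrative placeholder, and the body/target modeling choice shown is one option, not the project's settled design.

```python
OA = "http://www.w3.org/ns/oa#"  # Open Annotation namespace

# Illustrative placeholder identifiers (not real URNs from the project):
annotation = "urn:uuid:0001"                               # the annotation
image_region = "urn:cite:demo:img.1@0.10,0.20,0.30,0.15"   # CITE URN with ROI
text_passage = "urn:cts:demo:tg.work.ed:1.1"               # mapped CTS passage

# The annotation links the image region to the passage it depicts;
# each triple is (subject, predicate, object).
triples = [
    (annotation, OA + "hasTarget", image_region),
    (annotation, OA + "hasBody", text_passage),
]
```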

Deploying and using the SoSOL interface for this inscription has enabled us to better understand the actual workflow we will need to support for the work on the inscriptions, and uncovered some differences between this workflow and the one currently supported by the SoSOL platform for the Papyrological work. Among other things, we have identified the need to make some decisions about how we want to handle the commentary and bibliography for the inscriptions, and we have also recognized the need for some design changes to the interface introduced by the CTS approach of keeping the translations in separate documents from the source editions. These changes will be included in the next iteration, during which we will also begin to work on adding support for storing image to text mappings as OAC annotations and continue to move forward with the support for TEI-Analytics and citation-based editing that will be required for the larger Perseus corpus.

Having used these tools to produce the XML and image mapping data for the Nedymos inscription, we are now also able to begin scoping the requirements for the eventual display of the digital edition. We have used the Groovy based reference implementation of a facsimile browser from the HMT project and the Alpheios browser plugins to experiment with the options and to produce screenshots through which we are able to review and discuss the requirements in a concrete way. In the next iteration we will decide upon an implementation approach for the display code and for supporting automatic integration of the display and editing environments.

Posted in Digital Humanities | 1 Comment