
Archive for September, 2009

Google Books: Department of Justice and ReCaptcha

Monday, September 21st, 2009

Department of Justice Brief Filed on Google Books Settlement

Not surprisingly, there has been a lot of activity around the Google Books settlement right around the filing deadlines. The US Department of Justice filed a brief (PDF) (link via searchengineland) with the court on Friday. DOJ objects to the settlement as proposed on several grounds, while recognizing the cultural significance of what Google is working on. The objections:

1) The scope and value of the settlement also implicate the public interest, and may not be appropriately resolved by a private lawsuit; really, Congress should be drafting legislation to do this.

2) The structure of the proposed Book Rights Registry may create a situation in which it is impossible for anyone else to compete in the new market.

3) Not all of the interested parties may have been adequately notified of the settlement and the possible changes to their rights.

4) It would be more appropriate for Google to have rights-holders opt in to the agreement, rather than the current arrangement, which assumes consent.

More detailed summaries at the New York Times and Search Engine Land.

Significantly, DOJ is not rejecting the deal outright, and is apparently working with the parties to modify the agreement. A hearing is scheduled for October 7th.

Improving OCR

As an example of how quickly things can change: while I was composing my book-length post on the problems with optical character recognition, Google was buying a company that has been working on that problem in a really innovative way. Captchas are the odd squiggly bits of text websites make you retype at login to screen out spam bots. ReCaptcha takes advantage of this common mechanism to proofread troublesome documents. Instead of one word, users are offered two. The first word is known to be correct; the second was flagged as questionable during OCR of an online archive of texts. If you get the first one right, your reading of the second one is likely to be right too, and that data can be fed back to improve both the source text and the OCR software. The technology has already been used to improve the New York Times historical archive. Google announcement (via CNET). How ReCaptcha works.
I suspect it will be a while before Gothic-typeface German or Ancient Greek is prioritized. Still, since the ReCaptcha technology is designed to be installed on a variety of different websites, there's no reason specialized web communities like H-Net or Voice of the Shuttle couldn't install it and let their expert users contribute their expertise. A sketch of the underlying validation logic follows.
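The mechanism is simple enough to sketch in code. Here is a minimal, hypothetical illustration in Python of the two-word validation described above; the real service is far more elaborate (image distortion, multiple independent votes per word, spam heuristics), and every name here is invented for the example:

```python
from collections import Counter

# Hypothetical data for one challenge: a control word whose answer is
# known, paired with a word the OCR software flagged as questionable.
known_words = {"challenge-123": "library"}
unknown_words = {"challenge-123": "word-747"}
votes = {}  # unknown word id -> list of human transcriptions

def check_answer(challenge_id, first_typed, second_typed):
    """Pass the user if the control word matches, and record their
    reading of the unknown word as a candidate correction."""
    if first_typed.strip().lower() != known_words[challenge_id]:
        return False  # failed the known word: likely a bot (or a typo)
    word_id = unknown_words[challenge_id]
    votes.setdefault(word_id, []).append(second_typed.strip().lower())
    return True

def accepted_reading(word_id, threshold=3):
    """Once enough independent users agree on a transcription,
    feed it back to correct the source text."""
    if word_id not in votes:
        return None
    reading, count = Counter(votes[word_id]).most_common(1)[0]
    return reading if count >= threshold else None
```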

The Book, Terms of Service

Thursday, September 10th, 2009

A nice thought experiment: what would the licensing terms for a book look like if they were spelled out the way terms for computer software, music, and movies are? (via librarian.net)

Example:

I. Privacy
What takes place in the exchange between your brain and the contents of The Book is your exclusive private concern. The Book will never download the contents of your brain, either whole or in part.

Google Books Practicalities (Part II)

Tuesday, September 8th, 2009

(from a Greek/Latin New Testament on Google Books)

Professionally speaking, I have mixed feelings about the Google Books project, for reasons I will try to explain below. It has the potential to completely change how research in the Humanities is done. I'm not given to hyperbole; it is a simple fact that a full-text searchable database of book-length texts the size of a research library (much less two dozen of them) has never existed, and there are all sorts of possible unintended consequences. Most of them are good for scholarship. The proposed settlement was a surprise. I had been placidly assuming that the lawsuits would go on forever, and the day the settlement was announced last fall I quite literally spent about forty-five minutes staring at my computer screen trying to figure out the ramifications. This series of posts contains some of my thoughts, and I'll share more as I read more.

What Google Books Does Well

Full text search. It provides searchable access to a vast quantity of published literature. Library catalogs are built to facilitate browsing: books are described in general terms, and once you're in the right spot on the shelf you're surrounded by related materials. This is exactly what you need for some projects. But for other projects, where you're looking for an obscure fact or name, or (a library school classic) trying to identify the original version of a particular quote or phrase, it doesn't work as well. Going that extra step requires you to use the index and table of contents of the book. It's entirely possible to flip through an entire shelf of books in a few minutes if all you're looking for is references to a person or idea…but standard back-of-the-book indexing never covers every word or concept mentioned. A computerized index (ideally at least, see below) does, and a computer can search thousands or millions of items in the time it takes you to open one. The sketch after this paragraph shows the idea.
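The data structure behind this kind of search is the inverted index: a map from every word to every book containing it. This is a toy sketch in Python with made-up sample data, nothing like Google's actual implementation (which adds ranking, stemming, positional data, and much more):

```python
from collections import defaultdict

# Made-up sample "books": title -> full text.
books = {
    "Book A": "the critique of pure reason",
    "Book B": "a shelf of books about an obscure person",
}

# Build the inverted index: word -> set of titles containing it.
index = defaultdict(set)
for title, text in books.items():
    for word in text.lower().split():
        index[word].add(title)

def search(word):
    """Return every book containing the word, without opening any of them."""
    return sorted(index.get(word.lower(), set()))

print(search("obscure"))  # ['Book B']
```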

Scale. There are ebook collections which can be searched in bulk, but most built for scholarly purposes are fairly small. Even Early English Books Online, a massive collection, includes only about 125,000 items. Google is scanning the entire contents of dozens of research libraries worldwide: books, journals, everything. Current estimates are that ten million items have been scanned so far.

Scope. Modern research has grown steadily more interdisciplinary for years. In contrast, most ebook collections, like the roughly 1,700 titles in ACLS Humanities, are small and fairly narrowly scoped.

What Google Books Does Poorly

Again, I’ll refer you to the article by Geoff Nunberg I mentioned in my previous post. He eloquently makes the case for metadata, which is to say that it really does matter how well you can describe the book and relate it to others.

Typefaces and Languages
The search index is generated by running Optical Character Recognition (OCR) on the page images of the texts produced by the scanners. OCR works best on clean texts in modern typefaces. Old books fare less well. Books in foreign languages (especially non-Roman alphabets) fare less well. Old books in foreign languages are very, very hard. A few extreme examples appear below, after a short illustration of why language matters so much. Google does get credit for providing access to the raw text of the search index, which most traditional library vendors do not.
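To make the language problem concrete: an OCR engine has to be told which language model to apply, and the wrong choice produces exactly the gibberish shown in the New Testament example below. Here is a minimal sketch using the open-source Tesseract engine through the pytesseract wrapper; this illustrates the principle and is not Google's actual pipeline, the file name is hypothetical, and the lat, grc, and deu_frak language packs must be installed separately:

```python
from PIL import Image
import pytesseract

# Hypothetical scan of a Greek/Latin parallel page.
page = Image.open("parallel_new_testament.png")

# Treating the whole page as Greek garbles the Latin half into
# Greek-alphabet gibberish, much as in the example below.
greek_only = pytesseract.image_to_string(page, lang="grc")

# Telling the engine both languages are present helps considerably.
both = pytesseract.image_to_string(page, lang="grc+lat")

# Gothic-typeface German needs its own Fraktur training data.
fraktur = pytesseract.image_to_string(page, lang="deu_frak")
```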

The process is similar in concept to what JSTOR and ACLS do, but because JSTOR (about 1,500 journals) and ACLS (about 1,700 books) cover vastly smaller collections of texts, their descriptive and cataloging data tend to be of much higher quality. From conversations with JSTOR reps at conferences, I know they are aware of the problem I am about to describe, but none of the database aggregators and vendors handles this kind of material particularly well. The only solution on a large scale is Project Gutenberg's collaborative proofreading model.

From Google Books:
A Greek-Latin parallel text New Testament (1821)
what the page looks like
what the search engine sees
Comment: I love several things about this, while being genuinely sympathetic to how hard it is to get a machine to make any sense of what's going on here. The first is that, as is not uncommon, the introduction to this Greek-Latin parallel New Testament is in Latin, as is half of the text itself. Yet the language chosen for OCR seems to have been Greek, so all the Latin gets processed into Greek gibberish (as is most of the Greek), and the two-column division of the page isn't registered by the computer at all.
Liddell and Scott Greek Dictionary (1848)
what the page looks like
what the search engine sees

Lewis and Short’s Elementary Latin Dictionary (1894)
what the page looks like
what the search engine sees

Comment: This is almost usable, though not a completely reliable version. Latin seems to work better, but italics and all the other tricks typesetters use to make a huge body of text legible to humans really aren't legible to a computer.

Immanuel Kant’s Gesammelte Schriften, volume 5
what the page looks like
what the search engine sees

Comment: This is, to borrow Nunberg’s phrase, a complete train wreck. For practical purposes it’s not searchable…and it’s the canonical version of Kant’s complete works.

Things That Reassure Me I Will Continue To Have A Job

1) Optical Character Recognition cannot do everything yet. In particular, until it can reliably read a Latin text from the 19th century or earlier, much less anything in Greek or in German Gothic type, it is not yet a replacement for the tools traditionally in use.

2) Google does not know that volume five of Kant's Collected Works is related to anything else in the eight-volume edition Tufts has. So if I wanted the Critique of Pure Reason, in volume 3, Google assumes I will be able to find it by searching for it. Except see #1.

3) The settlement applies only to out-of-print books. Current editions will still be searchable only on whatever terms Google agrees to with publishers. The library will still be your most convenient free source.

And yet, like Nunberg, I am optimistic. There are explicit terms in the agreement which allow Google to make the data available for research and study. Once the huge collection of data exists it will be possible to make it better.
Questions or concerns about Google Books or the future of libraries? Ask in the comments.

Google Books Settlement (Part I)

Friday, September 4th, 2009

Today (September 4th, 2009) is the last day for authors to opt out of the proposed class action settlement between Google and publishers concerning the Google Books project. Since the settlement was announced last fall there has been a blizzard of press coverage and arguments for and against the settlement. Part I is an introduction to the project, the players, and what's under discussion.

The Story So Far
Starting in 2004, Google, in cooperation with several libraries worldwide, including Stanford, the University of Michigan, Oxford, Harvard, and the New York Public Library, began the largest digitization project in history. The goal was to comprehensively scan the collections of these major research institutions and make them available online. Google claimed that “fair use” allowed it to make copies of the millions of works involved as long as it displayed no more than brief excerpts online. Publishers disagreed, claiming that the act of making the copies was itself copyright infringement on a massive scale, and sued Google in 2005. While the case has been pending, a number of other libraries have joined the project, including the Bibliothèque Nationale, the Bavarian State Library, and the National Library of Catalonia. A proposed settlement was announced last fall, with a time limit for objections set by the court overseeing the case.

The Settlement
The settlement would set up a Book Rights Registry to manage royalties from the sale of digital books and from advertising. Current publications and pre-1923 publications would be handled the same way they are now. A major problem with past digitization efforts has been determining who owns the copyright to large numbers of “orphan works”, those whose copyright holders cannot be identified easily, or at all. The settlement would hold royalties in trust where copyright is unclear, providing an incentive for publishers to claim their orphan works.
Objections to the settlement largely center on monopoly power, pricing, and privacy. Robert Darnton of Harvard opened the serious discussion of objections in an article in the New York Review of Books in February. While the settlement is not an exclusive one, the path Google has taken is not open to many others: anyone wanting to replicate the Google Books project would have to begin scanning books, get sued by every author and publisher on the planet, and come to a settlement. Google could also abuse its monopoly position by raising prices to whatever level it wanted. The Internet Archive and Amazon object that the way to address the orphan works problem is to change the law to make it possible, not to use a court case to rework how copyright law operates for the benefit of one company. The FTC and the American Library Association have expressed concerns about the privacy implications of such a large digital library.

A second set of concerns, unrelated to the legalities, involves the quality of the data produced by Google's massive and rapid scanning (almost 10 million books in five years). A recent article in the Chronicle of Higher Education by Geoff Nunberg of UC-Berkeley points out a range of problems that need to be addressed to make the Google library useful for scholarship.

Additional overviews
Tome Raider (brief). From the Economist. Good coverage of the European angle, which is important but less discussed in US publications, since the settlement would apply to US publishers only.
Google’s Moon Shot (lengthy). By Jeffrey Toobin, in the New Yorker.