This paper was refereed by the Journal of Electronic Publishing’s peer reviewers.

Abstract

This paper examines the reality of implementation of e-commerce standards in the book and journal supply chains, and identifies where the barriers to more widespread implementation lie. It compares this with the situation in other media, and considers some of the challenges of convergence and divergence. Although the challenges identified are considerable, it concludes by discussing why there may be reasons for optimism about the future.

The Challenge of History

It is a rather obvious truism to say that industries are shaped by their histories. However, at a time when media industries are undergoing unprecedented change in their business and cultural environment, the legacy of those industries is particularly challenging. Publishers of all types, alongside the music and movie industries, TV and games, all the different sectors of what are often called the “copyright industries,” are adjusting their business strategies in the light of the “switch to digital” (with varying degrees of success). However, many continue to struggle with some very significant problems in implementing those strategies because of underlying shortcomings in their systems and processes.

The overall challenge was neatly summed up by the CEO of Red Hat, Jim Whitehurst. Speaking at the CED’s Venture conference in 2010, he said:[1]

“Every industry has built brittle, optimized systems around the way the world was 20 years ago when those companies were originally successful.”

He was not speaking specifically about the copyright industries—it was a much more generalized observation than that—but frankly he might have been. It is taking a long time for systems in the media to catch up with the real world even as it is today. And today, of course, is only the beginning of the story.

This article will look at one part of this picture—the development and implementation of communication standards in support of e-commerce: essentially, identifiers and metadata. The emphasis will be on the publishing industry, but it will also look at some other parts of the content industry and consider the challenges that face us in managing convergence in the near future.

The Beginning of Standards for e-Commerce

Look back to the mid-1980s, and the well-known dictum “The nice thing about standards is that there are so many of them to choose from”[2] could not justifiably have been applied to the specialist standards that had been developed for the book and journal supply chains. In fact, specialist e-commerce standards were thin on the ground. There were specialized EDI (electronic data interchange) messages[3] in both Tradacoms and X12 formats (depending on the part of the world in which you operated), and these were joined (a little later) by EDIFACT messages to meet the same set of requirements.[4] In the journals field, the ICEDIS committee (the International Committee for EDI in Serials)[5] had a suite of fixed-format “tape” standards for the communication of renewals between subscription agents and publishers, which were very widely implemented around the world.

And book publishers, of course, had the ISBN, a standard that had its roots in the 1960s, officially standardized in 1971, and arguably the most successful product identifier ever devised—there are now ISBN agencies in over 160 countries around the globe, and few books enter the supply chain anywhere without an ISBN. The ISSN followed in 1975—and although strictly speaking not a standard for commerce, it has been widely used for that purpose ever since.

There were no commercial standards for the exchange of metadata (indeed, it is doubtful that you would have found anyone in the book or journal industries who would have known what metadata was). The information necessary for “books in print” publications was communicated on paper. Only in libraries was there any standard for the communication of descriptive information, and this was for a very different purpose—MARC had been devised in the 1960s for the electronic communication of library catalogue cards.

And note that all these standards are still in use in 2011. This may perhaps be seen as a tribute to their robust construction but is also a reflection of a major challenge facing us today. These standards are still in use in a landscape that is profoundly different from the one in which we operated in the decades in which they were devised.

The standards were devised at a time when computing power, storage, and communication were unimaginably more expensive than they are today. Data formats were developed for economy, with limited and fixed field sizes.

The Standards Explosion

Then, starting in the late 1980s but really taking hold in the 1990s, the number of standards in use began to multiply—as indeed did the number of standards organizations. These include not only mega-organizations like W3C (on whose standards we all depend) but many specialist organizations—including, in the international publishing space, the International DOI Foundation and particularly CrossRef (the DOI agency that dominates identification in the academic publishing sector), and EDItEUR. There are also a large number of national organizations, including (in the U.S.) NISO and BISG and (in the U.K.) BIC.

This explosion of standards across what can be broadly characterized as the media space—including libraries, archives, education and training, and all parts of the commercial media—has continued unabated over the last decade, to the point where even we, as standards specialists, have real trouble keeping up with the incomprehensible maelstrom of acronyms. This becomes even more complex when you also have to consider the standards for content formatting (like the NLM standards and EPUB), but these are beyond the scope of this paper.

The driver for the development of these standards has primarily been the increasing influence of the ubiquitous network—or rather of the machine-to-machine communication that this network has enabled. Today, it is becoming hard to remember the days of Value Added Networks, where every message was a measurable expense. The Internet has become the carrier, and communication standards are now (for the most part) expressed in XML.

In the book trade, the most obvious example is ONIX for Books. Development of ONIX[6] began at the Association of American Publishers (AAP) in the late 1990s, as publishers recognized the extent to which Internet retailing of (print) books was going to change the landscape of the business. A mechanism had to be developed for the more effective communication of “rich product metadata”—what I tend to describe as “anything you might find describing a book on an Amazon page.” The importance of metadata for selling books became widely recognized.
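As a very rough illustration of what this “rich product metadata” involves, the sketch below parses a small, hypothetical fragment laid out in the general shape of an ONIX 3.0 Product record and extracts a few of the fields a retailer might display. The element names and code values follow published ONIX conventions, but the fragment is simplified and invented for illustration; it is not a complete or valid ONIX message.

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical fragment in the general shape of an ONIX 3.0
# <Product> record. Illustrative only; a real message carries far more.
onix_fragment = """
<Product>
  <ProductIdentifier>
    <ProductIDType>15</ProductIDType>   <!-- 15 = ISBN-13 in the ONIX code lists -->
    <IDValue>9780000000000</IDValue>    <!-- placeholder ISBN -->
  </ProductIdentifier>
  <DescriptiveDetail>
    <TitleDetail>
      <TitleElement>
        <TitleText>An Example Monograph</TitleText>
      </TitleElement>
    </TitleDetail>
    <Contributor>
      <ContributorRole>A01</ContributorRole>  <!-- A01 = author -->
      <PersonName>Jane Example</PersonName>
    </Contributor>
  </DescriptiveDetail>
</Product>
"""

product = ET.fromstring(onix_fragment)

# Pull out a few of the fields a retailer would display on a product page.
isbn = product.findtext("ProductIdentifier/IDValue")
title = product.findtext("DescriptiveDetail/TitleDetail/TitleElement/TitleText")
author = product.findtext("DescriptiveDetail/Contributor/PersonName")

print(isbn, title, author)
```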

EDItEUR took responsibility for the management and development of this standard in 2001, and we now have 17 “national groups” around the world who contribute to the governance of ONIX for Books—with the most recent additions being Japan and China. We are also expecting the development of an Arabic language group.

During the last decade, EDItEUR (in response to member requirements) developed a number of additional standards: a family of ONIX for Serials messages (jointly developed with NISO); a family of messages for the communication of rights and licensing information, including ONIX for Publication Licences and ONIX for IFRRO (jointly with the International Federation of Reproduction Rights Organisations); and EDItX XML EDI messages (proposed as replacements for and extensions of the EDI messages of an earlier era).

However, implementation of all these messages is patchy, an issue we will explore further.

About four years ago, in response to requests from our members for a number of improvements, and particularly for more flexibility in the description of e-books, we began the development of ONIX for Books 3.0. This was published in April 2009, but implementation of this specification has again been very slow (although it is finally beginning to accelerate).

What accounts for the sharp differences between the successful implementation of some standards, and the slow implementation of others?

The Power of Incumbency

Of course, part of the explanation is easy—time. The standards that are ubiquitous today were once new, and their implementation, too, once seemed slow.

The implementation of standards is always driven by the same fundamental objective—to save costs (although interestingly the implementation of ONIX for Books may equally be driven by the imperative to sell more books). Getting machines to speak the same language reduces the cost of communicating—it is still horrifying to me how much data is rekeyed, often more than once.

However, many of the savings that standards have brought to commercial transactions have already been made—indeed the additional costs that would be associated with the loss of EDI standards are unimaginable. Many of these standards remain fit for purpose—or at least very nearly so. And it is only the gap between what you can do with an existing standard and what a new one can do for you that represents the potential return on the investment in implementing the new standard.

This problem is exacerbated by two other factors in the implementation of supply chain standards. The first is network effects. New or revised standards suffer from the “single telephone” problem—there is no value in having a telephone unless someone else has one that you can call; the real value comes only when telephones become ubiquitous and many people have them. This is the reason that we never charge for using EDItEUR standards—it is very much in our members’ interests if everyone adopts our standards; the value to everyone increases with each new user.

But there is an additional problem. In supply chains, the costs and benefits of standards implementation are often unevenly distributed. Sure, it is in everyone’s interest if the efficiency of the supply chain is improved—but what if the cost of creating that efficiency is mine and the benefit is yours?

So, many standards that are no longer entirely fit for purpose remain stubbornly in place, while the implementation of new ones—some not so new, and all designed to respond to a recognized need—remains an uphill struggle.

Managing Complexity

One criticism that is frequently leveled at standards organizations is that standards are becoming too complex, as if in some way we deliberately make things too complicated and expensive for implementation in the “real world.” It has become popular, particularly in some U.S. book publishing circles, to talk about “metadata bloat”—the implication being that metadata is in some way an alien life form, disconnected from the real business of publishing.

The reality is rather different. What has happened in the last two decades is that the business of managing books and journals has become a great deal more complex—and the metadata necessary to describe this complexity is simply a mirror. This isn’t simply the case for publishers—across the media, business has become more complex as products have broken away from the physical constraints of the pre-Internet world (while often continuing to occupy that physical world as well).

However, many of our systems are still optimized for the management of a world that is now passing by quickly. Publishers are trying to manage “digital” as an adjunct to their physical businesses—and unsurprisingly finding that this is very difficult. Whereas once you might have had two or, at the most, three different products for the same title, now there are many more[7]—and that is before you start to think about fragmentation of content.

Publishers’ systems struggle to manage this complexity adequately. Many publishers are managing e-book metadata entirely separately from their print book metadata, in silo applications (with the creation and even maintenance of e-book metadata often not done by the publisher at all, but delegated to a supplier—or, worse, suppliers). Little wonder that those whose job it is to manage metadata “at the sharp end” complain that metadata is growing out of control.

Tools for managing metadata are struggling to keep up—but even where the tools are adequate, the funding for investment is simply not available.

It is, of course, not only metadata that is a challenge here—publishers are also learning to manage their content much more effectively than they have done in the past. The intimate relationship between digital asset management and metadata management implies disruptive changes to well-established workflows and responsibilities in publishing houses; but this is a topic that goes beyond the scope of this paper.

The Skills Gap

The other challenge of which we are continually aware is the shortage of technical skills available to publishers and others in the supply chain. It is perhaps not surprising that many of the highest-grade XML skills available are focused on products rather than on back-office systems. But the extraordinary demand we face from apparently technically sophisticated organizations to provide standards in CSV formats rather than XML—and the apparent complete inability of their back offices to read an XML specification—is testimony to a real problem.

It is noteworthy that, as an international organization, we come across this problem more in the United States than probably anywhere else in the world.

Other Barriers

There isn’t only a skills gap; there is also a real resources gap. Investment in standards always involves upfront costs for a (sometimes uncertain) cost saving or service improvement tomorrow. While everyone may agree that it is desirable to communicate a particular type of information within the supply chain—that there is a real requirement—the willingness and ability to invest in implementation typically lags behind the identification of that requirement. Too often, standards are developed that everyone agrees would be “nice to have”—but implementation of “nice to have” standards typically doesn’t happen; it just gets pushed out on the road map every quarter.

Standards often get implemented only when a sufficiently influential trading partner makes compliance a “cost of doing business.” Right now, many of the powerful new players in the supply chain are not insisting on standards compliance—and some are deliberately pursuing policies of not following standards as part of their commercial strategy.

Some recent standards have been implemented very quickly when they have been made a cost of doing business—COUNTER[8] has proved a very good example in the academic sector where libraries rapidly made COUNTER-compliance a contractual obligation—but this has been the exception.

Making It Simpler

How can we make things simpler for our constituencies?

One possible solution, advocated by Peter Brantley at the Internet Archive, is simply to learn to communicate much less complex information, on the model of the “Open Publication Distribution System” (OPDS), a sort of extended Dublin Core for e-books.[9] The problem with this proposal is that, like Dublin Core, it is designed to work in a much simpler world than the current reality of commercial e-book publishing. While there might be a place for using OPDS as an adjunct to ONIX for Books, it is deliberately not designed to communicate the richness of data that is asked for in today’s supply chain.
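To make the contrast concrete, the comparison below is a loose sketch of what a simple Dublin-Core-style description carries against the further kinds of information today’s commercial supply chain asks for. The field names approximate the scope of each approach; they are not the actual OPDS or ONIX element sets.

```python
# Loose sketch only: field names approximate the scope of each approach,
# not the actual OPDS/Dublin Core or ONIX element sets.

# Roughly the kind of simple, Dublin-Core-style description OPDS is built to carry.
simple_record = {
    "identifier": "urn:isbn:9780000000000",   # placeholder identifier
    "title": "An Example Monograph",
    "author": "Jane Example",
    "language": "en",
    "issued": "2011",
}

# Further kinds of information a rich commercial (ONIX-style) record is expected
# to carry in today's retail supply chain, and which a deliberately simple
# scheme does not attempt to express.
rich_record_extras = [
    "subject codes (BIC / BISAC categories)",
    "audience and market information",
    "long and short descriptive copy, reviews, cover images",
    "supplier, availability and territorial supply details",
    "prices by currency, territory and tax regime",
    "links to related products (other formats of the same work)",
    "usage constraints for e-books (copying, lending, text-to-speech)",
]

print(f"simple record: {len(simple_record)} fields; "
      f"a commercial feed adds at least {len(rich_record_extras)} further areas")
```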

Is it possible that the supply chain could become a lot less complex? Over time, in some ways that seems quite likely. But, for the time being at least, it would be a considerable risk to place bets on the direction that the simplification will take.

However, at the same time, we need to recognize that ONIX for Books poses a serious difficulty in implementation, particularly for smaller organizations (either creators or recipients). ONIX is designed to accommodate the requirements of a very broad range of users in many different markets around the globe. If you view a standard as an agreed language for communication within a self-selected community, the larger and more diverse that community, the more complex that language inevitably becomes (and the more difficult it becomes to communicate unambiguously). There is probably no one who needs to implement the entire ONIX for Books standard, but this very richness means that the standard (and its supporting documentation) is long and complex. In the past, this has certainly led to inconsistent implementation, which clearly detracts from the value of the standard. It is a common complaint that “no two publishers’ ONIX feeds are identical.”

Furthermore, in the past, ONIX for Books, although an international standard, has had distinct “flavors” in different parts of the world. These have been based on “best practice” guidelines created by national groups. Sometimes they have given directly conflicting advice; this hasn’t mattered unduly until the rise of international retailers, who need to be able to mix ONIX metadata coming from different countries in which the same message may need to be interpreted differently.

EDItEUR’s approach to the challenge of getting more consistency into ONIX has been, for the first time, to develop and publish international best practice guidelines for ONIX for Books 3.0[10]—these are in draft at the time of writing but are expected to be released as a v1.0 early in the summer of 2011. The publication of these guidelines is part of a broader push to improve documentation and make implementation less of a burden.

We are also testing two other mechanisms for simplifying implementation. The first is to develop additional, more specialized but individually less complex messages. While this leads to greater proliferation, it has the potential to provide some businesses with an easier route to initial implementation. Of course, this slightly increases our own complexity—but it is getting easier for us to manage our messaging standards within an overall technical framework that ensures we maintain syntactic and semantic integrity while at the same time making the individual message standards themselves less complex.

The other approach that we have adopted in the last couple of years is to insist—before we begin the development of a new or revised specification—on finding members who are willing to undertake pilot implementations. This ensures that we are not spending time and effort on the specification of standards that no one is fully motivated to implement.

Making It More Complicated

However, there are also trends moving in the opposite direction, and one of these is the management of rights. Ultimately, as the distribution of content moves from the physical to the digital environment, the unit of commerce moves from being sale of a product to sale of a license. You cannot sell someone a file, far less can you sell them the work that is recorded in that file. All you can sell them is a set of rights to access and use that file and the work that it contains. This is not the outrageous claim that some people seem to think it is[11]—it is simply a statement of the blindingly obvious. You can, of course, choose to license people to make use of that file as if they owned something (including giving them a right to resell it or to give it away) but that is a completely separate issue. We must acknowledge that in the licensing of digital content, it is difficult to think in terms other than analogies with what exists in the physical world—“lending” e-books is the obvious example. But these are always analogies.

Every content transaction on the network is a rights transaction. It can never be anything else.

And this is a major challenge, because really (outside of the collective management sector) no one in the book and journal supply chains has much in the way of rights and licensing systems—certainly not for the comprehensive management of rights and licenses through the complete lifecycle.

This has been well illustrated by our experience with ONIX-PL (ONIX for Publication Licenses). Libraries have for a long time been swamped by the sheer number of licenses they need to manage for the (now overwhelmingly digital) content they acquire each year. ONIX-PL is a standard for encoding these licenses in XML, so that they can be communicated in the supply chain—a requirement recognized by the Electronic Resource Management Initiative of the Digital Library Federation.[12] Librarians need to have proper access to license terms, and to be able to provide access to license information to library patrons at the point of use.[13]

With the financial support of the JISC[14] and of the Publishers Licensing Society, EDItEUR took the highly unusual step, for a standards organization, of commissioning specialist software to support the implementation of ONIX-PL. We recognized that no one would otherwise have the capability to create XML license expressions, and we have published an open source software tool, OPLE.[15] However, adoption of this standard has been very slow, although there are recent signs of increased interest. What’s the reason for this apparently very sluggish take-up? It cannot be altogether divorced from the lack of well-integrated systems for the management of this type of information at any point in the supply chain. Publishers have no experience in creating XML license expressions; libraries have no experience ingesting and making use of them. The value has been demonstrated,[16] but the lack of system capability is a real challenge.

And this is just the beginning. We have a number of different projects related to rights currently in development. One, being undertaken in collaboration with the Book Industry Study Group’s Rights Committee,[17] is looking at the potential for communicating rights-related royalty information at a business-to-business level—and we expect to publish a draft “strawman” specification for this message within the next two months. However, the sheer complexity and lack of consistency of semantics between different players in the chain looks daunting. There is a real communications “pain point”—but the challenge of relieving the pain will be considerable.

Another area of complication has started to become visible in the last few months—the question of subject coding. Within English-language markets, there has been a fundamental dichotomy over the use of subject coding schemes, with the U.K. (and other countries, including Australia) adopting the BIC subject categories (originally developed by Book Industry Communication in the U.K.)[18] and the U.S. and Canada using BISG’s BISAC Subject Headings.[19]

The need for a worldwide mechanism for the management of subject encoding is now moving higher up everyone’s agenda. Existing practices are well established, but a number of countries are now looking at translating BIC’s scheme on a one-to-one basis (to enable an entirely clean mapping), and suggestions have been made that BIC and BISG should attempt to find a common way forward. This would be very complex, not least because the two schemes have very different structures and origins—and they are in any case continually “shifting targets,” constantly refined and updated for their respective markets.
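A minimal sketch of why such a crosswalk is difficult follows; the codes and correspondences are hypothetical placeholders rather than the real BIC or BISAC category lists. Categories rarely line up one-to-one, so an automatic translation has to choose between losing precision, guessing, or flagging records for human review.

```python
# Hypothetical, simplified crosswalk between two subject schemes.
# The codes and correspondences below are invented placeholders;
# they are not the actual BIC or BISAC category lists.

# One source category may correspond to several target categories (or to none),
# so a deterministic one-to-one translation is rarely possible.
scheme_a_to_scheme_b = {
    "FICTION.CRIME": ["FIC.MYSTERY", "FIC.THRILLER"],   # ambiguous: two plausible targets
    "FICTION.GENERAL": ["FIC.GENERAL"],                 # clean one-to-one mapping
    "COOKERY.REGIONAL": [],                             # no adequate target: precision is lost
}

def translate(codes: list[str]) -> tuple[list[str], list[str]]:
    """Translate subject codes, reporting those that could not be mapped cleanly."""
    mapped, problems = [], []
    for code in codes:
        targets = scheme_a_to_scheme_b.get(code)
        if not targets:
            problems.append(code)      # unmapped: needs human review
        elif len(targets) > 1:
            problems.append(code)      # ambiguous: needs human review
            mapped.extend(targets)     # or pick one by editorial policy
        else:
            mapped.extend(targets)
    return mapped, problems

print(translate(["FICTION.CRIME", "COOKERY.REGIONAL"]))
```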

However, it has been suggested by some in the community that any attempt to converge the BIC and BISAC schemes is too little, too late. Such centrally defined subject coding mechanisms—primarily designed to support merchandising of physical books in bookstores—are insufficiently fine-grained and precise for the requirements of online discovery, particularly once online discovery becomes the only discovery mechanism available for selling online content. A completely different (and much more ambitious) project may be needed to develop (or, more likely, to adopt) taxonomies and ontologies that will enable really effective discovery. Such techniques have already been pioneered by individual medical and scientific publishers (where substantial authoritative ontologies already exist), and it seems we may have to facilitate a move in this direction over the coming years.

Other Media

How has book and journal publishing metadata developed in comparison with other media? There are perhaps some interesting parallels and contrasts to be drawn with the music industry.

Some aspects of rights communication have been much more highly developed in the music industry than in the publishing industry because of the very long established collective management of primary rights in music[20] (something that is currently unknown in the world of publishing, where only some minor secondary rights are traded collectively). The need to manage underlying musical works as well as recordings has led the music rights societies, led by CISAC,[21] to develop an extremely sophisticated (if closed) metadata infrastructure called CIS Net.[22]

However, at the commercial end of communication, the recorded music industry had no specialist e-commerce standards prior to the launch of Digital Data Exchange (DDEX) in 2006. DDEX messages comprehensively cover a wide set of requirements for communication about digital music, organized in three main message suites.

  • Electronic Release Notification Message Suite (ERN): This supports the communication of information about albums, sound recordings, musical works, and the contracts associated with them—usually sent from a record company or aggregator to a digital retailer.
  • Digital Sales Reporting Message Suite (DSR): This supports the communication of sales and usage information about albums, sound recordings, and musical works and the financial transactions associated with them—usually sent by a digital retailer to record companies and music rights societies or music publishers.
  • Musical Work Licensing Message Suite (MWLI): This supports the communication of information about musical works to enable musical work licensing—usually exchanged between record companies or digital retailers and music rights societies.

There are some parallels between publishing and the recorded music industry (the “record labels”) that are worth noting. In particular, both industries have struggled (and continue to struggle) with attempting to manage digital products with systems whose fundamental design is still oriented toward physical products. Both industries have had serious problems with identifier compliance (in the record industry’s case with the International Standard Recording Code). Implementation of new standards has been frustratingly slow. However, the DDEX standards are finally getting very considerable traction, and the number of implementation licenses has been growing exponentially as the bigger players in the industry begin to make compliance mandatory.

DDEX has also attracted the interest of the movie industry. The audiovisual sector, although very strong in technical content standards, has been late in recognizing the need for specialized e-commerce standards but has recently made a very significant move in the formation of a Digital Object Identifier (DOI) Registration Agency for audiovisual assets, EIDR.[23] This is of course the same technical infrastructure that is used throughout journal publishing (where EIDR’s equivalent is CrossRef).[24] The development of a pervasive unique identification system for AV is an important step along the way toward adopting a wider set of standards for e-commerce.

Convergence

Of course, what this all points to is convergence.

There was a time when trains crossing some national borders had to change bogies because of the lack of standardization of the rail gauge. Currently, we are seeing similar boundaries between the media on the Internet. Here the problem is perhaps even more acute, because the boundaries may be in our minds, our histories, and our industry structures—but they are not of even passing interest to consumers.

Within the publishing industry, we are curiously seeing both divergence and convergence at the same time: divergence, because different sectors of the book publishing industry can increasingly be seen as the different industries that they are (previously united by a single physical product); and convergence, because our online channels to market for digital content are converging with other media at high speed.

We are also seeing a growing need for convergence between commercial and library applications—with recognition that, despite the differences in requirements between commercial metadata and cataloguing, we need to find ways of avoiding unnecessary duplication of effort. There are some hopeful signs in this respect. The Library of Congress is using ONIX for Books as a way of substantially increasing the efficiency of parts of its CIP program;[25] OCLC has developed a mapping tool for managing ONIX to MARC and MARC to ONIX transforms (although it is worth noting that accurate mappings of this kind, between schemas with very different structures and purposes, are very complex and, without skilled human intervention, are almost inevitably “lossy”).[26]
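As a rough indication of why such transforms are lossy, here is a minimal sketch (not OCLC’s actual mapping tool) that carries a handful of fields from an ONIX-style product record, of the kind shown earlier, into MARC-style tags; everything in the record that has no MARC destination is simply dropped.

```python
import xml.etree.ElementTree as ET

def onix_product_to_marc(product: ET.Element) -> dict[str, str]:
    """Illustrative sketch of a one-way ONIX-to-MARC crosswalk.

    Real crosswalks involve many more fields, code-list translations, and
    punctuation rules, and still cannot carry ONIX data (prices, supply and
    availability details, marketing copy) that has no MARC home, which is
    why such mappings are almost inevitably lossy.
    """
    marc: dict[str, str] = {}
    isbn = product.findtext("ProductIdentifier/IDValue")
    if isbn:
        marc["020 $a"] = isbn        # 020: International Standard Book Number
    author = product.findtext("DescriptiveDetail/Contributor/PersonName")
    if author:
        marc["100 $a"] = author      # 100: main entry, personal name
    title = product.findtext("DescriptiveDetail/TitleDetail/TitleElement/TitleText")
    if title:
        marc["245 $a"] = title       # 245: title statement
    return marc                      # everything else in the ONIX record is lost
```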

Looking a little further forward, there is much in common between FRBR[27] and the <indecs> framework[28] (the abstract analysis that underpins ONIX, DDEX, and all DOI-associated metadata). There have been significant joint efforts on interoperability, led principally by the Vocabulary Mapping Framework (VMF).[29] Much remains to be done, but some significant first steps have been taken.

Perhaps the main challenge on convergence is the one that I have already touched on—the management of rights, licensing, and permissions. This is an area where individual parts of the media cannot possibly continue to work in isolated silos. However, there are some hopeful signs of collaboration in a project created and led by the European Publishers Council.[30]

Conclusions

This paper has perhaps been a little downbeat in its tone. We are facing considerable challenges in the standardization of e-commerce, and there would be little point in pretending otherwise. But the indicators on my dashboard are steadily becoming more positive.

  • At EDItEUR, we have increased our membership by over 20% in two years—no mean feat at a time of severe economic stringency, and a real vote of confidence in what we are now doing. So far as we can tell, new members are joining us because they are either upgrading (from ONIX for Books 2.1 to 3.0) or implementing one or more of our standards for the first time. We are seeing a significant uptick in the implementation of ONIX for Books 3.0 and in implementation of some of the ONIX for Serials messages.
  • Bringing both Japan and China into the ONIX for Books community is a major step forward in the internationalization of our efforts—and a monument to the original architects of ONIX, which has proved extremely robust as new requirements are made of it.
  • Agreement around documentation of international best practice will reduce international fragmentation still further.
  • It appears that disagreements between the U.S. and the rest of the world over the implementation of the ISBN for e-book identification may be diminishing (although it would perhaps be optimistic to expect a final agreement in the very near future).
  • There is growing evidence of a willingness to move from the venerable and extremely inflexible fixed-format EDI messages in the direction of a wider-spread acceptance of XML EDI.
  • There is increasing interest in the potential for embedding metadata within the content package itself (as represented by, for example, an EPUB file); the EPUB 3 specification makes reference to the ability to embed an ONIX record within the package (although this is not by any means exclusive). This raises some new challenges in terms of data integrity but also holds out the potential promise of the “self-cataloguing e-book.”
  • There is growing interest in the library space in the potential for use of ONIX.
  • There is growing collaboration between the media on the development of the essential cross-media infrastructure for managing rights, licensing, and permissions on the Internet.
  • Certainly, within EDItEUR, we are not expecting any shortage of things to do over the next few years.


Mark Bide was appointed Executive Director of EDItEUR in January 2009; he remains a Director of Rightscom, the specialist media consultancy where he has worked since 2001. Mark has worked in and around the publishing industry for 40 years, having been a Director of the European subsidiaries of both CBS Publishing and John Wiley & Sons. He is a Visiting Professor of the University of the Arts London.

EDItEUR is the global trade standards body for the book and serial supply chains. Based in London, EDItEUR is supported by 90 members in 20 countries. Probably best known for ONIX for Books, the standard for communication of rich product metadata in the book supply chain, which is implemented very extensively in global markets, EDItEUR also develops and maintains a broad range of e-commerce and data standards for both trade and library supply chains. EDItEUR provides management services for the International ISBN Agency, the federation of local ISBN registration agencies in 160 countries around the world.

Notes

    1. Quoted in J. Opp, “Jim Whitehurst: Don’t build a better mousetrap. Change the business model,” Applied Poetics Blog (2010). http://appliedpoetics.com/.

    2. Variously ascribed to Andrew S. Tanenbaum and Grace Murray Hopper.

    3. EDI is a term that is generally used to denote the communication of commercial messages—orders, invoices, etc. Different frameworks for these have developed in different parts of the world—X12 in the U.S., Tradacoms in the U.K., and EDIFACT more widely. This only becomes a significant challenge for cross-border trade; many of those who have to trade internationally have no choice but to implement multiple messages to serve the same purpose with different trading partners.

    4. EDItEUR was initially established in the early 1990s to develop specialized EDIFACT standards for the book trade and for the library supply chain.

    5. Then a free-standing organization but now a governance committee within EDItEUR.

    6. The name was originally an acronym—Online Information Exchange; however, like most good acronyms, it is no longer regarded as standing for anything—it is simply a brand name, and one that is very widely recognized around the world.

    7. For a more extended discussion of the challenges of e-book identification, see M. Bide, “The Challenge for Standards in the e-Book Supply Chain,” Information Standards Quarterly (2011) (in press).

    8. http://www.projectcounter.org/.

    9. The Open Publication Distribution System (OPDS) Catalog 1.0 (2010). http://opds-spec.org/specs/opds-catalog-1-0-20100830.

    10. http://www.editeur.org/93/Release-3.0-Downloads/#Best%20practice.

    11. See, for example, Cory Doctorow’s verbal attack on Richard Mollet of the U.K. Publishers Association at a conference at the 2011 London Book Fair, reported in P. Jones, “LBF: Change Needed, But Publishing Still Vital,” The Bookseller, April 12, 2011. http://www.thebookseller.com/news/lbf-change-needed-publishing-still-vital.html.

    12. T. D. Jewell et al., Electronic Resource Management: Report of the DLF ERM Initiative. Digital Library Federation, Washington, DC (2004). http://old.diglib.org/pubs/dlf102/.

    13. See F. Cave, B. Green, and D. Martin, “ONIX for licensing terms: Standards for the electronic communication of usage terms,” Ariadne 50 (2007). http://www.ariadne.ac.uk/issue50/green-et-al/.

    14. The Joint Information Systems Committee for higher and further education in the U.K.

    15. See http://sourceforge.net/projects/ople/ and http://www.editeur.org/22/OPLE-Software/.

    16. See, for example, C. Oppenheim et al., RELI: A Project to Pilot the Development of a Licence Registry: Final Report (2009). http://ie-repository.jisc.ac.uk/478/1/RELI_Final_Report.pdf.

    17. http://www.bisg.org/committee-2-17-rights-committee.php.

    18. http://www.bic.org.uk/7/BIC-Standard-Subject-Categories/.

    19. http://www.bisg.org/committee-2-11-subject-codes-committee.php.

    20. “Collective Management Organisations” (CMOs) have existed since the nineteenth century to manage rights collectively on behalf of rights holders (where individual rights management is impractical for reasons of scale—very high volume but relatively low value individual transactions). The original rights managed were the public performance rights of composers. In the U.S., organizations like the Harry Fox Agency and ASCAP manage rights for composers and music publishers; there are equivalent national organizations around the world. The nearest equivalent in the text domain in the U.S. is the Copyright Clearance Center.

    21. Confédération Internationale des Sociétés d’Auteurs et Compositeurs. http://www.cisac.org/.

    22. CISAC Announces Launch of CIS Net Using FastTrack Technology (2004). http://www.bmi.com/news/entry/233957.

    23. http://eidr.org/.

    24. http://www.crossref.org.

    25. Results appear to be unpublished but are known to have been extremely favorable. http://cip.loc.gov/onixpro.html.

    26. C. J. Godby, “Mapping ONIX to MARC,” OCLC Research (2010). http://www.oclc.org/research/publications/library/2010/2010-14.pdf.

    27. Functional Requirements for Bibliographic Records. (The report was first published in print in 1998 by K. G. Saur as volume 19 of UBCIM publications.) http://www.ifla.org/publications/functional-requirements-for-bibliographic-records.

    28. G. Rust and M. Bide, “The <indecs> Metadata Framework: Principles, Model and Data Dictionary” (2000). http://www.doi.org/topics/indecs/indecs_framework_2000.pdf.

    29. http://cdlr.strath.ac.uk/VMF/.

    30. EPC, “The Answer to the Machine is in the Machine”: Frequently Asked Questions (2011). http://www.epceurope.org/factsheets/the-answer-to-the-machine-is-in-the-machine-faqs.shtml.