2014-02-04

Well, I'm back to the weblog again because an idea has taken hold of me. I recently became aware of the Open Publication Distribution System (OPDS) Catalog format, a syndication format for e-publications based on Atom and HTTP. It is something like an RSS feed for e-books. People are using it to find and acquire books. It sounds like a natural fit for library digitization projects: an easy way for folks to know what's new and grab a copy if they like.
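Since an OPDS catalog is just an Atom feed with acquisition links, a minimal entry can be sketched with Python's standard library alone. The feed title, book title, identifier, and EPUB path below are all hypothetical; only the link relation and media type follow the OPDS convention.

```python
# Minimal sketch of one entry in an OPDS acquisition feed, built with the
# standard library. All titles, IDs, and paths are made-up placeholders.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "Digitized Local History"  # hypothetical
ET.SubElement(feed, f"{{{ATOM}}}updated").text = "2014-02-04T00:00:00Z"

entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "County Atlas, 1891"      # hypothetical
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:county-atlas-1891"

# The OPDS-specific part: an acquisition link pointing at the e-book file.
ET.SubElement(entry, f"{{{ATOM}}}link", {
    "rel": "http://opds-spec.org/acquisition",
    "type": "application/epub+zip",
    "href": "/books/county-atlas-1891.epub",  # hypothetical path
})

print(ET.tostring(feed, encoding="unicode"))
```

An OPDS reader polls such a feed the same way an RSS reader polls a blog, so "what's new in the digitization queue" falls out of the format for free.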
2013-05-06

I've been doing this weblog for over ten years. It's getting a bit old. I'm going to try posting to Twitter (https://twitter.com/Catalogablog) and see if that revives my interest.
2013-04-05

For years I've used MARC Magician to create bibliographic records. It is showing its age: updates have been few since 2005, and most of the newer fields and codes are missing. I can establish them myself, but it seems to me that is work the company should do for all its customers. I get the feeling the company is not interested in the software and is just letting it age into obsolescence.
2013-03-27

The Public Knowledge Project has announced the 1.0 release of Open Monograph Press.
The Public Knowledge Project (PKP) is very pleased to announce the 1.0 release of Open Monograph Press (OMP). OMP is an open source software platform for managing the editorial workflow required to see monographs, edited volumes, and scholarly editions through internal and external review, editing, cataloguing, production, and publication. OMP will operate, as well, as a press website with catalog, distribution, and sales capacities.
OMP 1.0 improves upon the public beta released in September 2012 in a number of ways. It includes a number of stability bug fixes and enhancements, particularly to the production and distribution workflows, and creation of ONIX for Books metadata support. It also includes multilingual support for French, Greek, Brazilian Portuguese, and Spanish.
2013-03-26

“Like”-able Content: Spread Your Message with Third-Party Metadata by Clinton Forry appears in the latest A List Apart. He looks at Twitter Cards and Facebook’s Open Graph protocol.
While implementing third-party metadata schemas will add to the content creation workload, that extra effort will provide a much better user experience across multiple platforms and devices, both current and upcoming. Crafting content in discrete chunks with an eye on universal application and flexibility is the way of the future.
2013-03-25

Could anyone point me to good statistics about science programming in libraries? Maybe some dissertations? I'm just not finding anything, but I don't have access to Dissertation Abstracts. Thanks.
2013-03-25

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.
2013-03-22

Easter is approaching, so it is Peeps season. Time to review Peep Research: A Study of Small Fluffy Creatures and Library Usage.
Although scientific and health research has been conducted on Peeps, most notably that appearing on the Peep Research website (see http://www.peepresearch.org), we have noted an absence of research focusing on the ability of Peeps themselves to actually do research. To address this lack, we invited a small group of Peeps to visit Staley Library at Millikin University during the week of March 17-21, 2003 so that we could more closely observe their research practices. This was determined to be an ideal week for the Peeps to visit the library, as Millikin University students were on spring break. The research that follows documents their visit to the library and provides some evaluative commentary on our assessment of Peeps and library usage.

The Georgetown Public Library Online Tour also features Peeps.
2013-03-21

A proposal for a Tamashek romanization table is available for review. Comments on this proposed romanization table are being accepted until June 20, 2013.
2013-03-19

A List Apart is giving away a free ticket to the IA Summit in Baltimore, April 5-7. A few of the talks:
2013-03-18

Dublin Core to PROV Mapping, W3C Working Draft 12 March 2013, seeks comments.
This document describes a partial mapping from Dublin Core Terms [DCTERMS] to the PROV-O OWL2 ontology [PROV-O]. A substantial number of terms in the Dublin Core vocabulary provide information about the provenance of the resource. Translating these terms to PROV makes the contained provenance information explicit within a provenance chain. The mapping is expressed partly by direct RDFS/OWL mappings between properties and classes, which can be found here.
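The "direct mappings between properties" the draft describes can be pictured as a simple predicate-rewriting step over RDF triples. The sketch below is illustrative only: the mapping table is a hypothetical subset using CURIE shorthand, and the normative correspondences should be taken from the W3C draft itself.

```python
# Illustrative sketch of a direct DC-to-PROV property rewrite over triples.
# The mapping table is a hypothetical subset -- consult the W3C draft for
# the actual normative correspondences.
DC_TO_PROV = {
    "dct:creator": "prov:wasAttributedTo",   # assumed pairing for illustration
    "dct:source": "prov:hadPrimarySource",   # assumed pairing for illustration
}

def map_triples(triples):
    """Rewrite mapped DC predicates to PROV; leave other predicates as-is."""
    return [(s, DC_TO_PROV.get(p, p), o) for s, p, o in triples]

# A toy record: one mappable predicate, one that passes through unchanged.
record = [
    ("ex:item1", "dct:creator", "ex:someAgent"),
    ("ex:item1", "dct:title", "Sample title"),
]

for triple in map_triples(record):
    print(triple)
```

The point of such a rewrite is the one the draft makes: provenance already latent in Dublin Core terms becomes explicit PROV statements that can participate in a provenance chain.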
2013-03-18

A formal ontology for historical maps by Eleni Gkadolou and Emmanuel Stefanakis was presented at the 26th International Cartographic Conference, August 25-30, 2013, Dresden, Germany.
Historical maps are a major component of our scientific and cultural heritage collections. Apart from the aesthetic value of the artifacts, maps also deliver valuable historical and geographic information. In order to use the historical cartographic information effectively, the semantic documentation of maps becomes a necessity and ontologies are suggested to achieve this. This paper examines how the top level ontology CIDOC-CRM “handles” historical maps and presents a formal description of the “Carte de la nouvelle frontière Turco-Grecque”, a map attached to the Convention of Constantinople that set the borderlines between Greece and Ottoman Empire in 1881.
2013-03-15

MARC Concise Formats (2012 Edition) are now available to download as PDFs.
2013-03-15

Announcement about using FRBR for journals.
Version 0.1 of PRESSoo, a conceptual model accounting for the bibliographic description of serials, now is available from the following address:
The intention, while drafting this document, was to fill in a gap acknowledged in the FRBR Final Report (section 1.3 "Areas for Further Study"):

Certain aspects of the model merit more detailed examination. The identification and definition of attributes for various types of material could be extended through further review by experts and through user studies. In particular, the notion of “seriality” and the dynamic nature of entities recorded in digital formats merit further analysis.
PRESSoo is defined as an extension of the FRBRoo model, which in turn is defined as an extension of the CIDOC CRM model as well as an object-oriented reformulation of the original FRBR entity-relationship model.
PRESSoo was developed by a small working group gathering representatives of the ISSN International Centre and the National Library of France.
This document is labelled version 0.1 because it still has to be reviewed by a larger community, most notably the international ISSN network, the FRBR/CIDOC CRM Harmonization Working Group, and the IFLA FRBR Review Group. Version 1.0 will only be attained once PRESSoo has been amended and validated by that larger community.
Any comment/criticism/proposal welcome!
2013-03-13

As it transitions from FGDC to ISO geographic metadata, the Federal Geographic Data Committee (FGDC) has provided training for other federal agencies. Some of these sessions have been recorded and the videos made available.
2013-03-12

The March 2013 issue of the OLAC Newsletter is now available.
2013-03-11

NISO teleconference news.
NISO will hold its monthly open teleconference this coming Monday, March 11th at 3:00 PM Eastern time. This month, we will be discussing the recently published NISO RP-15-2013, Recommended Practices for Online Supplemental Journal Article Materials (available at http://www.niso.org/apps/group_public/download.php/10055/RP-15-2013_Supplemental_Materials.pdf). This document was jointly sponsored and published with NFAIS, the National Federation of Advanced Information Services.
Business Working group co-chair Marie McVeigh of Thomson Reuters and Technical Working Group co-chair Sasha Schwarzman of The Optical Society (OSA) will be participating on the call to describe the work and answer any questions.
Supplemental materials are increasingly being added to journal articles, but until now there has been no recognized set of practices to guide in the selection, delivery, discovery, and preservation of these materials. To address this gap, NISO and NFAIS jointly sponsored an initiative to establish best practices that would provide guidance to publishers and authors for management of supplemental materials and would address related problems for librarians, abstracting and indexing services, and repository administrators. The Supplemental Materials project involved two teams working in tandem: one to address business practices and one to focus on technical issues. This new publication is the combined outcome of the two groups' work.
The call is free and anyone is welcome to participate. To join, simply dial 877-375-2160 and enter the code: 17800743#. All calls are held from 3-4 p.m. Eastern time.
2013-03-11

The cover sheets for the proposals and discussion paper presented at the ALA 2013 Midwinter meetings of the MARC Advisory Committee have been updated with the results of the discussions. They are available at:
2013-03-08

Roy Tennant has posted a list of $a and $2 combinations for the 655 field. It is interesting to see some terms with high counts that are tagged as local. It seems those should be considered for incorporation into an existing vocabulary. Electronic books, with 37,605 uses, seems a good candidate. Dissertations, with 16,733 hits, is another one to consider.
2013-03-07

The other day Catalogablog turned 10. The first post dealt with RSS, which I guess was new back then. One of the two links is still valid.
2013-03-06

In other NISO news, the latest NISO Newsline has been published. Topics include:
NISO is a membership organization that must be responsive to community needs and interests. As an organization with limited resources, it must also prioritize the many strands of activity that are taking place, to ensure we are working toward goals which will have the greatest impact.

And a reminder that comments on the ResourceSync Framework Specification for the web "detailing various capabilities that a server can implement to allow third-party systems to remain synchronized with its evolving resources" are due by March 15.
To help prioritize our work, the NISO Architecture Committee is identifying the important technologies and trends that face our community. As part of this process, we would like the NISO membership to complete an online survey related to potential NISO directions and activities.
2013-03-06

The latest news from NISO.
The National Information Standards Organization (NISO) announces the publication of maintenance revisions of two widely used standards: The Dublin Core Metadata Element Set (ANSI/NISO Z39.85-2012) and The Standardized Usage Statistics Harvesting Initiative (SUSHI) Protocol (ANSI/NISO Z39.93-2013). Both standards were revised to make very minor updates. The Dublin Core standard defines fifteen metadata elements for resource description in a cross-disciplinary information environment and is used as the basis for most metadata standards in use today. The SUSHI Protocol defines an automated request and response model for the harvesting of electronic resource usage data and is required for conformance with the COUNTER Code of Practice.
"The DCMI Usage Board approved a change to the usage comment for the 'subject' element to eliminate some ambiguity with the 'coverage' element," explains Thomas Baker, Chief Information Officer for the Dublin Core Metadata Initiative, the maintenance agency for the Dublin Core standard. "The new version of the ANSI/NISO standard corresponds to version 1.1 of the specification on the DCMI website."
"The SUSHI Standing Committee initiated this revision of the standard to make two minor updates," states Oliver Pesch, Chief Strategist for EBSCO Information Services and Co-chair of the SUSHI Standing Committee. "An additional error code was added and the appendix about security considerations was updated to reflect technology changes and experience gained since the initial implementation of the SUSHI protocol."
"Standards do not drop into a black hole once they are published," states Todd Carpenter, NISO Executive Director. "They must be supported and regularly reviewed to ensure they are kept up-to-date. Both the Dublin Core and the SUSHI standard receive ongoing oversight from their respective Maintenance Agency and Standing Committee. The maintenance revisions just published are examples of how the standards are revised to address even minor issues found during implementation."
Both standards are available for free download from the NISO website; Dublin Core at www.niso.org/standards/z39-85-2012 and SUSHI at www.niso.org/standards/z39-93-2013/. Additional information on the use of the Dublin Core standard is available from the DCMI website at www.dublincore.org. SUSHI FAQs, schemas, and implementation information are available at www.niso.org/workrooms/sushi.
2013-02-25

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code lists for current usage in MARC fields and MODS/MADS elements.
2013-02-25

Free Your Metadata is a site that describes using Google Refine and some extensions to clean and reconcile metadata and automate the creation of personal, corporate, and geographic names.
Clean up your metadata and discover how to handle those embarrassing errors.
Match your metadata with controlled vocabularies connected to the Linked Data cloud.
Even unstructured fields can provide meaning thanks to named entity extraction.
Once your metadata is in shape, it is ready to be published in a sustainable way.
2013-02-22

News from the eXtensible Catalog project.
I am happy to announce that, after several months of development, the eXtensible Catalog Drupal Toolkit 1.3 has just been released.
The eXtensible Catalog Drupal Toolkit is the front end of the eXtensible Catalog (XC), built on the Drupal content management system. It contains a set of 25 Drupal modules, a custom theme, an installation profile, and a customized Apache Solr search engine. XC is a discovery interface built on an FRBR- and RDA-like metadata structure.
The release has a primary focus on data integrity, namely being able to successfully process record updates on a scheduled basis. This includes new additions, updates, and deletions of records. This release also includes some Solr integrity fixes submitted by Kyushu University. The installation process for release 1.3 has been reworked to include an implementation option using Drush that makes installation substantially easier. If you have Drush, the whole installation is only four steps.
We also created a custom Solr package which is pre-configured to the needs of the Drupal Toolkit.
You can find the installation instructions and release notes here:
I hope you will find it useful. We are now working hard on creating the first stable release of the Drupal 7 version. Any comments, suggestions, and feedback are more than welcome. You can find the project's issue tracker here:
The eXtensible Catalog project's website is available at http://eXtensibleCatalog.org