panlibus


Welcome to the panlibus blog, where some Talis staff will muse, reflect, declare and who knows what about the library and information business and other loosely connected things. We'll do this from a personal viewpoint, not representing Talis in any way.

Updated: 2016-07-11T12:13:01.259+00:00


Panlibus has moved


We're now hosting our own blogging site at Talis.

Panlibus continues at

To help you stay tuned, here's the new location of the panlibus RSS feed:

Mobile Search - OpenSearch - RSS - Folksonomies - AJAX: When worlds collide


Now there is a title to conjure with. What was drifting through my consciousness when that one formed out of the mist? Well, in simple terms, lots of stuff. More exactly, lots of apparently disparate stuff that seems to naturally gravitate together:

RSS: the now ubiquitous newsfeed protocol that underpins the blogosphere, podcasting, personalised alerts, etc. etc.

OpenSearch: A9's build of a self-described, simply accessed 'standard' search API on top of RSS 2.0. A subject that I may just have mentioned over the last couple of weeks.

Mobile Search: as thought through by Russell Beattie in his Web To Mobile Search posting, and expanded on by Goobile.

Folksonomies: as well discussed by Ken Chad.

AJAX [Asynchronous JavaScript and XML]: all about JavaScript in the browser being used to bind together background XML data retrieval with dynamic display interaction. Take a look at Google Suggest, Google Maps, and Flickr to see the effects.

So what's with the gravity then? The Mobile Search discussion is based around the prediction that we will soon be carrying around the equivalent of a web server, full of contacts, info, photos, music, video, calendar, etc. in our pockets. With always-on networks, it could behave like one. To link everyone's pockets together [those that want to be, anyway] is a classic folksonomy situation, with a dose of IM presence added in.

So how do I get to that info, or, more importantly, know it is there, or get alerted to change? RSS, OpenSearch, and whatever they give birth to, that's how. That vision needs some routing through central resources like A9, Flickr, Bloglines, a corporate MS Exchange server, or whatever. Those services will know about the stuff and where it is, but will not necessarily need to store it.

This is where AJAX comes in. By using this type of technology your user interface could be serenely displaying simple information to you on your pocket screen, or PC, whilst behind the scenes it would be paddling like crazy aggregating access to what you are interested in.

What are the benefits of that then? You'll soon have as much computing power in your pocket as NASA would have been proud of a few years back, so it will be possible to carry out the processing required close to where it is needed. The data [your data] will be kept together where it is most relevant: with you. Your local device will only go and get something when it needs it, not having to download the whole planet to search for it.

Far fetched? Only time, but not that much of it, will tell![...]
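To make that background-aggregation idea a little more concrete, here is a minimal Python sketch that pulls the items from several RSS 2.0 feeds concurrently and merges them into one list, the kind of quiet paddling a pocket device could do behind a calm display. The feed URLs are invented placeholders, not real services.

    # Minimal background aggregation over several RSS 2.0 feeds.
    # The feed URLs below are invented placeholders, not real endpoints.
    import urllib.request
    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor

    FEEDS = [
        "http://example.org/photos/rss",     # hypothetical photo feed
        "http://example.org/calendar/rss",   # hypothetical calendar feed
        "http://example.org/contacts/rss",   # hypothetical contacts feed
    ]

    def fetch_items(url):
        """Fetch one RSS 2.0 feed and return (title, link) pairs for its items."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            tree = ET.parse(resp)
        return [(item.findtext("title", ""), item.findtext("link", ""))
                for item in tree.findall(".//item")]

    def aggregate(feeds):
        """Fetch all the feeds in background threads and merge their items."""
        merged = []
        with ThreadPoolExecutor(max_workers=len(feeds)) as pool:
            for items in pool.map(fetch_items, feeds):
                merged.extend(items)
        return merged

    if __name__ == "__main__":
        for title, link in aggregate(FEEDS):
            print(title, "->", link)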

Yet more OpenSearch discussion …….


The release of Amazon A9's OpenSearch protocol, and the discussions that have followed it, have identified some points of debate around what type and sophistication of search capability library and information systems should offer.

The latest shots in this debate come from Lorcan Dempsey and Thom Hickey at OCLC, both referencing their colleague Ralph LeVan.

To caricature the situation:

In one camp we have the information scientists and librarians extolling the virtues of powerful, flexible search systems that allow the user to describe, in the finest detail, down to an individual part of a MARC tag, what they are searching for, and then to combine that with other equally detailed search elements, limited by many things such as language, format, dates, and the author's inside leg measurement. [I did say it was a caricature]

In the other camp we have the proletariat of users who find putting more than two words into an Amazoogle prompt a bit of a strain.

The first group delight in a search screen with more prompts than you can shake a stick at, the rest have never clicked an Advanced Search link in their life.

So on which of these groups should library and information system developers and suppliers concentrate their efforts, and for which should they develop supporting protocols? I contend that we need to serve both communities as fully as possible. Without the former there would be far less stuff catalogued for the latter group to reliably search for and find.

Thom questions whether there is a middle ground between SRU and OpenSearch. I think the answer is that there is something between these two that is worth discussing; whether it is in the middle, I'm not so sure. Ralph commented, and I replied in more detail than here, on one of my previous posts on the subject.

Ralph has offered to help develop guidelines that could allow an SRU and OpenSearch compatible solution to emerge. His experience around Z39.50, SRU/W, and metasearch will be invaluable here. I am also happy to get involved in such discussions, maybe coming at it from the other end, wearing a hat bearing the legend "Unadventurous member of the Internet Proletariat".
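To make the two ends of that caricature concrete, here is a small Python sketch that renders the same user query both as an SRU searchRetrieve request (a CQL query with explicit record windowing) and by filling an OpenSearch-style URL template with its {searchTerms}, {startIndex} and {count} tags. The catalogue host names are invented; the parameter names are the ones the two specifications define.

    # Sketch: the same search expressed for SRU and for an OpenSearch template.
    # The host names are placeholders; the parameter names come from the specs.
    from urllib.parse import urlencode, quote_plus

    def sru_url(base, query, start=1, maximum=10):
        """Build an SRU searchRetrieve request with a CQL query."""
        params = {
            "operation": "searchRetrieve",
            "version": "1.1",
            "query": query,            # CQL, e.g. 'dc.title = "open search"'
            "startRecord": start,
            "maximumRecords": maximum,
        }
        return base + "?" + urlencode(params)

    def opensearch_url(template, terms, start=1, count=10):
        """Fill an OpenSearch URL template of the {searchTerms} variety."""
        return (template
                .replace("{searchTerms}", quote_plus(terms))
                .replace("{startIndex}", str(start))
                .replace("{count}", str(count)))

    print(sru_url("http://catalogue.example.org/sru", 'dc.title = "harry potter"'))
    print(opensearch_url(
        "http://catalogue.example.org/os?q={searchTerms}&start={startIndex}&n={count}",
        "harry potter"))

The point of the comparison is not that one is better, but that the SRU request can carry arbitrarily precise CQL while the OpenSearch template only ever asks for a bag of words.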

“It's not a system that would impress a librarian, but…. “


Folksonomies remain in the news with Jack Schofield in last Thursday’s Guardian reporting back from the Emerging Technology conference in San Diego, California that “Folksonomy was the big, bad, buzzword”

Schofield asserts that folksonomies, as used by sites such as Flickr for sharing photos or for web links, would not “impress a librarian”. But “they are also important because this is probably the only viable way of tagging billions of items on the net. No one is going to hire millions of trained librarians to do the job”.

It's not that librarians haven't tried. In 1998 OCLC launched the CORC project, turning its vast cataloguing expertise to "taming the Web" with the prospect of a catalogue of Web content on the scale of OCLC's huge bibliographic database, WorldCat. "Both full USMARC cataloguing and an enhanced Dublin Core metadata mode will be used", it was announced. More modestly, at Talis we have Talis List, a web-based reading list system that allows academics and/or librarians to "harvest" (in a manner not unlike delicious) and categorise web sites very simply and add them to a course "resource list" for students.

It’s not just “tagging” technology that is challenging librarians. As regular readers of this blog will know, Talis has been engaged in a project around RSS technology that has now expanded to include Open Search. Coincidentally OpenSearch was also featured at the Emerging Technology conference by Amazon’s Jeff Bezos. Richard Wallis has discussed our take on RSS and OpenSearch in more detail including his Talis Prism (library catalogue) OpenSearch proof of concept.
So coming back to the first point, librarians may not be impressed with what seem to be simplistic approaches to cataloguing, classification or search. We know the problems are complex. The point, though, is that we can see our comfortable, complex, feature-rich but domain-specific technologies and standards, like MARC and Z39.50, being challenged from outside the domain by companies with a bigger problem to solve: Web 2.0. That's why, at Talis, we take them seriously and get involved.

Udell wooed by OpenSearch


To be honest, I wasn’t even planning to enable RSS subscription to InfoWorld search. It just came for free. When that happens, it’s a sign that things are deeply right.

Jon Udell on InfoWorld has a play with A9's OpenSearch.

When I heard about OpenSearch, I wondered how hard it would be to integrate my new view of InfoWorld search as a “column” in A9. As I soon learned, it’s almost trivial.
I know I've been banging on about OpenSearch a bit since it was announced, but in the same way that its parent, RSS, has rapidly changed the way we find out about things happening, I get the feeling that A9 are going to get the credit for instigating a rapid change in the way searching hangs together.

Don't get me wrong, OpenSearch is a long way short of a search utopia, but with a little bit of evolution [a couple of 1.x updates and then a version 2.0] I believe it stands a chance of becoming the utility search protocol that could knit together much of the web's search nodes into a cohesive unit.

Time will tell as to the quality of my crystal ball skills. Nevertheless, reading between his lines, Jon Udell seems to agree that there is something in this worth watching.

Podcasting from Open Stacks


I've just been listening to Greg Schwartz of Open Stacks' podcast [mp3] on his experiences at last week's CIL show.

Apart from deserving our sympathy (he was obviously suffering from 'Man Flu' throughout the event), his insight into what he saw was interesting as well: not just the presentations, but the buzz around the show, and his observations on the communities of users whose radar we need to get the library to show up on.

The main reason I'm blogging this is that it's great to see libraries entering the world of podcasting. Since the coming of podcasting and rewritable CDs, my car journey home from the office has been a far more informative, interesting, and entertaining experience. (Yes, I'm one of the few on the planet without an iPod.)

People like Greg, and the guys from IT Conversations [btw I wonder what has happened to the mostly excellent Gillmor Gang, they appear to have fallen off the planet] have opened my eyes & mind to loads of things relevant, but not necessarily directly connected, to what we all do and think about.

Dave Errington, Talis CEO, in his keynote at the Talis Insight Conference last November, predicted that podcasting, along with IM, RSS, blogging, etc., will start to gain greater influence in our world. It's great to see his prediction coming true so soon.

Anyway, back to the cause of this posting. Well done Greg, keep it up. I hope you get to Internet Librarian in October and podcast from there, and maybe we'll get to meet up.


Panlibus now feeds into

A Talis demonstrator for the new evolution of RSS - A9's OpenSearch


You can always tell when a technology has become established: it starts being used for something else. How many remember that HTTP was a protocol just designed to shift hypertext around a network, or that SGML, with its offspring HTML and XML, was just something to make typesetters' lives easier?

Well, with Amazon A9's announcement of OpenSearch, RSS has reached that stage in its evolution. Whatever the arguments about what the letters RSS actually stand for [my favorite, and the one in the specification, is still Really Simple Syndication], its use up until now has been about providing alerts or newsfeeds of events to users. The few variations on this, such as Podcasting and our own Personalised RSS, are still basically all about alerting you to events. Even MSN's RSS search alerts you that a search is now returning some new results.

So how has OpenSearch evolved beyond the original concept of RSS?

Firstly, the starting point. RSS is not only really simple but it is established. If you want to do anything with it from a development point of view, there is sufficient stuff out there for you not to have to bother with any of the low-level detail. If you are in the Java world, just pull down Rome and away you go; similarly in the .NET universe.

Secondly, the problems its implementation components solve are very similar to other problems out there. This is where OpenSearch gets into the story. Newsfeeds provide lists of generically described items in a results set. For a search engine to return an answer, it needs to provide lists of generically described items in a results set.

So with the launch of OpenSearch, A9 have created an instant community for search that most engines would want to be part of, and have made it very easy to join. The best way to prove exactly how easy is to do it. And over the last couple of days, that is what I have done. Talis now have a prototype OpenSearch interface to the demonstration Prism Library OPAC.

The following link takes you to the OpenSearch standard Description Document: its contents describe the OpenSearch as provided by Prism, and the way to access it. The 'Url' element contains the URL used to access the service, encoded in the OpenSearch Query Syntax. By replacing the '{}' encoded tags with values, A9's service can construct requests to search and then page through sets of results. Not rocket science, as you can see. So it won't be long before A9 is not the only OpenSearch client on the block.

If you have been clicking on these links in your browser you will only be seeing XML. Try pasting the links into your favorite RSS reader to see the effect. Better still, try it from A9 [you will have to register and log in, but it's worth it]. Enter the description URL [] into their Create New column page and press load. You will then see the description in a more readable form and, more importantly, a preview of what the results will look like in A9 is loaded at the right of the page.

The exciting bit, from the potential user's point of view, is that by clicking on the title of a result you are taken to the detail for the result, displayed in the Prism interface. If you were in a real library you could then go on and place a reservation request for the item, or discover which branches the item was held at, etc. As an aside, this functionality is provided by yet another technology that is spreading its wings beyond its original concept, OpenURL. But that's another story.

So where next? The T[...]
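For readers who want a feel for what sits behind such a description document, here is a Python sketch that builds a minimal OpenSearch 1.0-style description and performs the '{}' template substitution a client such as A9 carries out. The service URL and query are invented; this is not the actual Prism document.

    # Sketch of an OpenSearch 1.0-style description document and the template
    # substitution a client performs. The service URL below is invented.
    import xml.etree.ElementTree as ET

    NS = "http://a9.com/-/spec/opensearchdescription/1.0/"
    ET.register_namespace("", NS)

    TEMPLATE = "http://opac.example.org/search?q={searchTerms}&start={startIndex}&n={count}"

    def description_document(template):
        """Build a minimal description document with the Url template inside."""
        root = ET.Element("{%s}OpenSearchDescription" % NS)
        ET.SubElement(root, "{%s}ShortName" % NS).text = "Demo OPAC"
        ET.SubElement(root, "{%s}Description" % NS).text = "Search the demonstration library catalogue"
        ET.SubElement(root, "{%s}Url" % NS).text = template
        return ET.tostring(root, encoding="unicode")

    def fill(template, terms, start=1, count=10):
        """Replace the '{}' encoded tags with real values to build a request URL."""
        return (template.replace("{searchTerms}", terms)
                        .replace("{startIndex}", str(start))
                        .replace("{count}", str(count)))

    print(description_document(TEMPLATE))
    print(fill(TEMPLATE, "opensearch", start=11))   # page two of ten-per-page results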

Changing role of public libraries


I've come across some research that deserves a wider hearing: an article published in the Journal of Documentation (Vol. 60, No. 6, 2004, pp. 632-652) in which Douglas Grindlay and Anne Morris research the causes of declining borrowing in UK libraries.

The strong conclusion is that increasing personal affluence is the single direct cause of decreasing issue figures.

This should be factored into thinking about the future role of libraries: the mandate for their existence is moving away from issuing books.

When will XML replace MARC?


This is the subject line of a thread that's been running on the XML4LIB email list over the last couple of days. The question has been around almost as long as XML itself, but MARC is still very much with us. Several writers in the thread argue that the question is wrong: XML cannot replace MARC because they are different things. For me, though, that confuses three different components in the MARC world: the MARC standard (ISO 2709); the different flavours such as MARC 21, which are like application profiles defining content designators and lists of values; and content standards, predominantly the Anglo-American Cataloguing Rules (AACR). ISO 2709 is a kind of extensible markup language designed for data exchange and so could be replaced by XML, but that can only be done effectively when the other two components are re-aligned to modern requirements and to the flexible power of XML.

Not surprisingly, the Library of Congress' MARCXML framework is discussed in the thread. In a strict sense, it replaces MARC, i.e. ISO 2709, with XML. But it deliberately emulates precisely the restrictive structural characteristics of MARC, enabling round-trip, no-loss conversion, to allow MARC data to be manipulated and transformed to other formats and contexts using modern XML tools. Undoubtedly, this has been a tonic for the geriatric MARC, or (switching metaphors) it is a useful bridge or stopgap between the old world of MARC and a new world, as yet not fully realised, based on XML. It allows a little more value to be squeezed from the huge investment of systems and data in MARC.

Some writers in the thread, however, criticise MARCXML for not being the panacea that it makes no claim to be. Its structure means that it is not well suited to XML indexing systems, so performance is sub-optimal, and, more importantly, it is not capable of articulating metadata in ways that are now required. Several writers call not only for better articulation of the metadata but also for a different set of metadata elements, more suited to modern requirements for search, navigation, data presentation and interchange between heterogeneous environments. Peter Binkley (University of Alberta) puts it well:

... we need metadata to aid not just searching but also clustering, linking to relevant external resources, etc. - all the things we can do in the new environment to enhance search results and other forms of access. The XML tools for using web services etc. are great and will get better much faster than anything MARC-based.

Here, though, we move into the territory of application profiles and content rules. As several other writers in the thread point out, an area of activity that could be leading the way to a full replacement of MARC is that based on the Functional Requirements for Bibliographic Records (FRBR). In the publishing world, it provided the conceptual model for the bibliographic elements of the Indecs framework, which led to the development of the ONIX format. Now, its principles are being built into the next edition of the Anglo-American Cataloguing Rules, AACR3. Although AACR3 will be capable of expression in MARC 21, it will push MARC's capabilities closer to the limits. MARC records have been 'FRBRised' in a number of different initiatives with some success, but the work has clearly discovered shortcomings in the MARC format.

MARC will not be replaced by a single, dominant and self-contained metadata format. We can no longer even scope the contents of a 'self-contained' record. Increasingly, we require and have the ability to connect pieces of content dynamically and unrestrictedly, as we move towards the semantic web. The 'replacement' will be a metadata infrastructure. This is well argued by Roy Tennant in his article A bibliographic metad[...]
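As a small illustration of why MARCXML is such a direct transliteration of ISO 2709, here is a Python sketch that walks a minimal MARCXML record: the same tags, indicators and subfield codes as the binary format, simply re-expressed as elements. The record content is invented.

    # Walk a minimal MARCXML record: MARC tags, indicators and subfield codes
    # carried over unchanged into XML. The record itself is invented.
    import xml.etree.ElementTree as ET

    MARCXML = """
    <record xmlns="http://www.loc.gov/MARC21/slim">
      <leader>00000nam a2200000 a 4500</leader>
      <controlfield tag="001">demo123</controlfield>
      <datafield tag="245" ind1="1" ind2="0">
        <subfield code="a">An invented title :</subfield>
        <subfield code="b">a MARCXML example.</subfield>
      </datafield>
    </record>
    """

    NS = {"m": "http://www.loc.gov/MARC21/slim"}
    record = ET.fromstring(MARCXML)

    for field in record.findall("m:datafield", NS):
        for sub in field.findall("m:subfield", NS):
            print(field.get("tag"), sub.get("code"), sub.text)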

Public Library Impact Measures Published


In a positive move towards defining a common purpose for libraries, and one which delivers to Government priorities, the Public Library Impact Measures were launched this week. Details can be found on the MLA web site.

Parliamentary review of Public Libraries


The UK Parliamentary Select Committee on Public Libraries has now published its findings; the report is available at:
The background evidence is available in a separate report:

Whilst giving credit where it is due, it identifies the patchy nature of library provision in the UK and the lack of clear focus and priorities across all services:
We regard a situation in which core performance indicators, and gross throughput, are falling—but overall costs are rising—as a signal of a service in distress.

Our key recommendations are designed to focus attention on libraries' fundamental role in promoting reading and we seek to distinguish clearly between core functions and desirable add-ons (prioritising resources in favour of the former). There need to be far stronger links between national library standards (which themselves need improving) and effective mechanisms to encourage and enable library services to meet, if not surpass, them.

I'm sure this will provoke considerable debate!

Future of Public Libraries


Report from the conference on 'The Public Library Service in 2015'. This conference was set up by the Laser Foundation to discuss the thought-provoking 'Futures Group Report'. I attended with high hopes of some visionary debate. The audience was made up of many of the people who run the nation's public library services.

John McTernan (an adviser to the Prime Minister) started with some challenging thinking on what the Government expects from public libraries; it should be no surprise that they're looking for change. His hypotheses for the future were: there may be no public libraries in 2015; the public library service may be nationalised, as part of the British Library; public library services may be delivered at neighbourhood level. A number of clear steers on short-term challenges to libraries emerged: public service reform will be applied to libraries (read efficiency savings); local government is likely to be reorganised into 'effective economic units delivering a cluster of services'. Read what you like into this - to me it sounds like a soundbite for fewer local authorities (there is already talk about reorganising local government into regional/sub-regional/cities). He finished with the challenge that librarians have to want to change. The evidence I heard was of delegates showing willingness to change, but not knowing what to change to.

Lord McIntosh from DCMS talked about new regional structures (there's a message here from the two government speakers: this is going to happen). His other message was about cutting back-office costs to release resources to the front line (efficiency savings again), and encouraging library authorities to work together on service provision (so regionalise yourselves before we do it to you?). This was not a vision of a future public library service, more a view of future local government reorganisation, and this from the Minister for Libraries! Chris Smith, MP and former Minister for Libraries, said more of the same, questioning whether libraries should be part of local authority structures.

So three people from Government talked about reorganisation, stemming from the need to cut costs in local government, but said nothing about the role for, or the value they see in, public libraries. Is this a reflection of the apparent confusion in the role of public libraries in the face of declining issues - is there no other service that libraries can claim to do well? More news is due soon from government with the imminent publication of the Select Committee Report on Public Libraries. We also await the results of the MLAC-sponsored report on book procurement in public libraries.

Chris Trinick, Chief Executive of Lancashire County Council, provided a positive view of the future - it's encouraging to find the man in charge of one of the country's largest local authorities a proponent of libraries. He still challenged whether libraries would be part of local authorities, providing some alternative suggestions based on charitable status. Significantly, he laid down the challenge to libraries to take decisions on future services based on evidence of benefit (back to my recent post on value).

Charlie Leadbetter, consultant and one of the leading thinkers behind Framework for the Future, came closest to setting out a vision for future services. His vision is of a three-fold service: provide self-service for the users who know what they want (get out of the way); offer a personalised service for users who don't know what they want (a tailored interview/response service); and reach out to those who either don't know they have needs or can't articulate them (a very localised approach through pubs, supermarkets, etc.). This picks up and expands on an area that many delegates were sensitive to in the Futures [...]

Metadata adds value: Photos plus metadata equals money.


Sharing photos via websites amongst family and friends is nothing unusual, and now some websites are offering to market your photos to publishers and other markets, enabling you to make money. Metadata provides the value-add, transforming a picture of your holiday that you might otherwise only share with family and friends into a product that a publisher will buy. fotoLibra is a good example of such a site; it has been getting some attention in the business press and was featured in February's Real Business magazine's "50 to watch 2005". With their business model in mind, fotoLibra devote a lot of effort to advice on how to create metadata, even (in their forum) going into some detail about Dublin Core.

"Established [picture] libraries are now busy cataloguing and digitising their huge collections, and searching can be a nightmare. That's where fotoLibra has a great advantage. The people cataloguing its images are the ones who know them best—the photographers and picture owners themselves. That means picture buyers can find exactly what they're looking for, quickly and reliably."

So here is value from metadata being created at source. There is, however, an interesting forum debate about the resulting problems of "keywording": people often overdo the metadata and "throw the dictionary or thesaurus at it". I also found it interesting that they are using DOIs.
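As a rough illustration of the record-at-source idea, here is a Python sketch that describes a photograph with simple Dublin Core elements, a restrained handful of subject keywords and a DOI-style identifier. The namespace is the real Dublin Core element set; the photograph, keywords and DOI are invented.

    # Sketch: describing a photograph with simple Dublin Core elements.
    # The namespace is the real DC element set; the values are invented.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("dc", DC)

    def dc_record(title, creator, subjects, identifier):
        root = ET.Element("metadata")
        ET.SubElement(root, "{%s}title" % DC).text = title
        ET.SubElement(root, "{%s}creator" % DC).text = creator
        for keyword in subjects:            # resist throwing the thesaurus at it
            ET.SubElement(root, "{%s}subject" % DC).text = keyword
        ET.SubElement(root, "{%s}identifier" % DC).text = identifier
        return ET.tostring(root, encoding="unicode")

    print(dc_record("Harbour at dawn", "A. Photographer",
                    ["harbour", "dawn", "fishing boats"],
                    "doi:10.0000/example-photo-1"))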

Flickr, from Ludicorp Research, is another photo sharing site that is getting a lot of attention and was featured in the Guardian recently. It is interesting that you can choose to license the photos you upload to Flickr under a Creative Commons licence. They too place a lot of emphasis on metadata, or "tags". Both these sites, because they are about sharing, are good examples of folksonomy in action. I noticed Ludicorp's president Stewart Butterfield is sharing a platform to speak on "Folksonomy, or How I Learned to Stop Worrying and Love the Mess" at next month's Emerging Technology conference in San Diego.

Value and the future of public libraries


You know that a movement might be forming when two challenging ideas arrive on the same day.

1. I was reading an excellent article first published in American Libraries about value and public libraries (thanks to Lorcan Dempsey for the reference), at: This quote particularly resonated:
"Valuable does not necessarily correspond with the library staffs ideas of importance."

2. Understanding value is an area that is challenging UK public libraries, as evidenced by today's publication of the Laser Futures Group report on the Future of Public Libraries.

Put these together and stir the pot.

One of the options the Laser report mentions is that the running of library services be handed to the regional MLAs and removed from local authority control. This would seem to be a way to remove duplication and inefficiency, but it calls for further research and brings us back to the question of value - the answer is in the eye of the beholder, so maybe we should ask the users. Does a large authority such as Essex provide better 'value' than a small one like Kingston-upon-Thames just because of scale economies?

I was in the audience at PLA this year when Jeff Jacobs, Director General at DCMS gave a challenging speech (no powerpoint!) on the need for libraries to prove their value. He got a frosty reception to say the least, but I thought he made an absolutely fair point - one that appears to be gaining some currency now. His point was to ask what benefit libraries give in terms of economic contribution to their communities, rather than an emotive argument about the value of culture.

I question what public libraries are doing to understand where they add value. I'm hoping that other people will be asking this question, even trying to answer it at the forthcoming 1-day conference on 4th March that the Laser Foundation have organised to discuss this report – possibly the most important day this year for UK public libraries. I'm very much looking forward to it and I'll blog the day.

Talis working in partnership with Amazon


I thought it would be worthwhile sharing with the blog community a new partnership that Talis has announced with none other than Amazon. A press release has just gone out and some of the book trade press have picked it up, including The Bookseller and Book2Book.

As the press release underlines, the Venice project was about offering more profitable ways for libraries to dispose of their stock. In this case, we worked with Amazon to enable our library customers to connect with Amazon Marketplace.

We have to thank East Renfrewshire, who really went for the idea from the outset and were willing to devote their time and staff resource to testing the concept. They initially loaded 500 titles using a batch process developed by Talis and sold 20% of the stock within a 12-week timescale. The return they got from selling the books on Amazon's Marketplace far outstripped the revenues they would have made had they pursued the more conventional approach of a booksale, AND I would go as far as saying the additional staff resource required to run the process was more than recovered. Well, East Renfrewshire certainly seem to think so, because even though the project itself has now ended, they will continue to run the software.

One of the really neat features of Venice was that we used Amazon's Web Services to pull data into our application. Just by scanning the barcode of the book with a reader, a library could view for the first time: the item's list price (as new); the second-hand list price (lowest and highest); and the number of used copies on sale for that particular item. Armed with this information, East Renfrewshire could make sound decisions on what was worth selling through Marketplace and what they could dispense with more effectively elsewhere, i.e. via booksales etc. It also allowed them to "play the market". Their sales strategy was to undercut other vendors on the site, in order to get the books out of the door as quickly as possible.

The project raised many interesting points, which are worthy of further consideration:

Purchasing behaviour - East Renfrewshire found that by adopting the Venice approach to stock disposal they began to analyse their front-end acquisition process. Dare I say it, they could consider the idea that the speed with which stock sold via Marketplace might influence their future purchasing decisions, i.e. let's buy more of those kinds of books because we can dispose of them more quickly, efficiently and profitably at the end of their shelf-life.

Visibility of value - the data we pulled into the application via Amazon's Web Services gave East Renfrewshire, for the first time, a true sense of the market value of their holdings. It went some way to dispelling the notion that a book ceases to have a pecuniary value once it has been used. More than that, it has highlighted to the library that some of their older stock has more than held its value. It has actually become, in the eyes of the marketplace, "rare", and the price reflects this. We have to ask: would so many of the public libraries' stock of First Edition Harry Potters have disappeared if the stock manager had a tool like Venice to keep them informed?

Optimisation - as car owners we make a point of knowing when in our car's life it has reached its optimum resale value; whether we act on it or not is a different matter. Venice has offered the same optimisation opportunity to libraries for their books. Over time, a library using the Venice approach could understand that a book's resale value is optimised at a given point, and adjust their withdrawal policies accordingly. Potentially, the shelf life of a library item could be shortened, but the reve[...]
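The pricing data described above came from Amazon's Web Services; the sketch below, with an entirely hypothetical lookup function standing in for that call, is just meant to illustrate the decision a member of staff could make at the barcode scanner: see the lowest used price, undercut it slightly, and choose between Marketplace and the booksale. It is not the Venice code and not the real Amazon API.

    # Hypothetical sketch of a Venice-style disposal decision. lookup_prices()
    # stands in for a real web-service price query keyed on the scanned ISBN.
    def lookup_prices(isbn):
        """Placeholder: a real system would query a pricing web service here."""
        return {"list_price": 12.99, "used_low": 4.50, "used_high": 30.00,
                "used_copies_on_sale": 7}

    def disposal_decision(isbn, undercut=0.25, threshold=1.00):
        """Undercut the cheapest used copy; below the threshold, send to booksale."""
        prices = lookup_prices(isbn)
        asking = prices["used_low"] - undercut
        if asking >= threshold:
            return ("list on Marketplace", round(asking, 2))
        return ("send to booksale", None)

    print(disposal_decision("9780000000000"))   # invented ISBN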

Ground breaking Library Personalised RSS


Talis, in partnership with Northumbria University Library, have launched a trial of personalised RSS (PRSS) feeds for Library users. This trial is part of the Talis Research Project Bluebird. Members of the trial and other interested parties, interact on the Talis Bluebird Forum.

Subscribers to their personal feed receive alerts from their Library account such as 'Item due for return in 3 days', 'The item you reserved is now awaiting collection at the Library', or 'Your overdue item has already attracted in excess of £2.00 in charges'. The feed items provide a link that takes the user, without an intervening login challenge, into their Library interface at the appropriate page to take the required action, such as renewing the book on loan.

To illustrate the issues surrounding the requirement for alerting Library Users, to describe the technology used, and to give an overview of the trial I have published a white paper Personalised RSS for Library - User Interaction.

We have set-up a Demonstration PRSS Feed to show how the loaning activity of a fictitious user [Mr Draco Malfoy] would be represented in his Personalised RSS feed. Over the next couple of months Mr Malfoy will reserve, loan, and return (often late) items from the Demonstration Library to provide pseudo realistic RSS traffic.
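To show the shape such a feed might take, here is a Python sketch that builds an RSS 2.0 channel containing the kind of account alerts described above, each with a deep link to the relevant page. The alerts and catalogue URLs are invented examples; the trial's actual feed format may well differ.

    # Sketch: building a personalised RSS 2.0 feed of library account alerts.
    # The alerts and catalogue links are invented examples.
    import xml.etree.ElementTree as ET

    ALERTS = [
        ("Item due for return in 3 days",
         "http://opac.example.org/account/loans/42"),
        ("The item you reserved is now awaiting collection at the Library",
         "http://opac.example.org/account/reservations/7"),
    ]

    def personalised_feed(borrower, alerts):
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = "Library alerts for %s" % borrower
        ET.SubElement(channel, "link").text = "http://opac.example.org/account"
        ET.SubElement(channel, "description").text = "Personalised library account alerts"
        for text, link in alerts:
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = text
            ET.SubElement(item, "link").text = link   # deep link to the right page
        return ET.tostring(rss, encoding="unicode")

    print(personalised_feed("Draco Malfoy", ALERTS))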

So what is ground breaking then?

Firstly, Talis are the first LMS/ILS supplier to demonstrate live Library Borrower/Patron account data alerts using RSS.

Secondly, although there are many thousands of RSS feeds around there are very few that are personalised to a specific user on a specific system. Up until now RSS has been [as the most popular definition of those three letters imply] about Syndicating published information in a Really Simple way, to anyone who can subscribe.

Along with RSS Feeds to return search results, as announced by MSN [ MSN Search: Panlibus ] and other Library suppliers [theshiftedlibrarian - "ILS vendor to offer native RSS feeds out of the catalog"], PRSS opens up the third generation of RSS applications. (Podcasting ushered in the second generation. So many generations and not yet a teenager! )

PRSS has the promise to open up a whole new world of proactive alerting for subscribed users, and you heard about it here first folks!

The image of Libraries just being places with lots of books where there is not much innovation is definitely old hat!

The changing face of libraries


It was good to hear the BBC debating libraries again on Tuesday. (BBC Radio 4 "Shop Talk") Tim Coates was there doing some “retail” challenging and Andrew Stevens from the Museums Libraries and Archive Council (MLA) agreed that bookshops have taken the lead in marketing and presenting their wares and libraries can learn a lot from them. (btw Andrew did a keynote presentation to the Talis Insight conference too in November).

Heather Wills from Tower Hamlets explained their Idea Stores and how the initiative was based on major market research, so they were providing what people wanted, like better locations, 7-day opening and access to IT. She took pains to emphasise the role of books and that borrowing was going up (in contradiction of the national trend).

Books have become a sort of cipher to represent unchallengeable cultural value, I think. It's assumed we all agree books are good, so we don't have to go further and debate their underlying cultural value. Indeed libraries are (by literal definition at least) about books, and certainly this is what last November's report to Parliament on public library matters states as the "core purpose of libraries". To my mind the underlying value of books (and more widely, of course, the process of reading and literature itself) was far better expressed in the same month by Philip Pullman in his Guardian article (itself an extract from an article in Index on Censorship) on the danger to democracies if they "forget how to read" and in effect lose their imagination and demand that reading is "for" something, in essence only to support a particular agenda or outcome. That's why I think the work that Rachel Van Riel of Opening the Book is doing is important. She also gave an invigorating keynote presentation at the Talis Insight Conference in November.

The value of culture – the value of libraries


Paul Miller, the Common Information Environment Director, made some interesting comments on 4th February in his blog about the challenge of "placing a value on culture". He cites the Demos report of December called "Capturing Cultural Values" by John Holden: "Cultural organisations and their funding bodies have become very good at describing their value in terms of social outcomes..."

I certainly view libraries as "cultural organisations", so this issue is of very real concern to Talis, because we provide technology solutions to libraries and if they don't see value we won't sell our products and services. It's (relatively) easy to make the case (around efficiency and cost saving) for technologies like RFID, but it can be harder to assign (cash) value to the bigger and more important issues. As our products evolve and make more and more resources available in easier and more digestible ways, we confront real business model problems. Users want access to more and more content, whilst the owners/providers of content are struggling to find ways of making money from their content over the internet, and we've seen a real difficulty in finding workable business models.

But maybe this is just inevitable at this stage in the technology's evolution? We've all let Google dazzle us with their programme to digitise the collections of major libraries like the Bodleian in Oxford. This is part of Google's mission to "organise the world's information", but they admit they still don't know what the business model will be. Fabio Selmoni, Google's European Sales Director, made a fascinating comment during Tuesday's BBC Radio 4 programme "Shop Talk"; the theme was the changing face of libraries. When challenged by the presenter Heather Payton about how the project will make money, he remarked that they weren't preoccupied with the business model. He was really admitting that right now they don't know how they will make money. At this stage it was a research project being done because it fitted into Google's vision. In a sense, isn't that just how our public libraries operated in the past? I mean, there was a strong cultural vision. Is some of that being lost with the emphasis on "outcomes" and CPA ratings? Of course the vision was based on a business model where ultimately the taxpayer pays. So who will pay Google? Advertising? Hmmmmm

Visualisation - the future in OPACs


As a community we have to recognise that OPACs are not user-centric. Visitors to the library rarely use them for anything other than specific location information, and the idea that web-enabling OPACs in the current form will extend their reach to a wider audience is flawed.

We have to embrace the notion that "NextGens" will continue to switch off from text-oriented systems as they become increasingly attuned to more visual and immediate stimuli. We see this in the development of learning objects for teaching and the extensive reach of the gaming culture.

It's good to see search engines like Groxis grasping the nettle. And software companies like Anacubis are already manipulating business intelligence from Hoover's and others to represent commercial relationships in visual formats that are far easier to absorb. In addition, Anacubis is recreating Google searches in a visual format, which is worth a look.

Talking of which, a colleague of mine pointed me to another site, which has used visualisation technology to produce Google sets in a visual format.

So what does that mean for libraries? Well, I am a fan of the Amazon feature "the person who bought this, also bought....." but am frequently disappointed. The information presented can occasionally turn up gems, but rarely do I find subject-specific or genre-specific information. Quite frequently, the purchasing patterns reflect a desire to buy material by the same author/artist, which makes me feel that I could get the same value from viewing an author/artist's bibliography/discography.

Would a library's borrower information reflect things in the same way? And would there be additional value in a visual format? I'm not sure, but I think it could be interesting, and it could potentially stimulate interest from disengaged users.

Web Services and Metasearch – VIEWS on the subject


Talis is a founder member of VIEWS – the Vendor Initiative for Enabling Web Services.

VIEWS is an industry-wide cooperative effort to leverage libraries’ expertise in understanding, processing, and delivering information with the functional and practical efficiencies delivered through Web Services.

Read more about Talis and VIEWS

As a bit of a new boy to the world of standards bodies and the like, I was very intrigued to see what goes on in those international conference calls, and what the process is that somehow ends up producing some of the documents I have cursed in my time as a developer. Many is the time I have been known to mutter quite loudly, "How the flipping-heck can they make the description of something simple so complex and obscure!" Or similar words to that effect ;-}

So with a mixture of interest and trepidation I volunteered to be on the VIEWS sub-committee looking at Metasearch and how Web Services could be relevant in that area.

I have been banging on about proper integration for the last few years. By 'proper' I mean Web Services based, Service Oriented Architecture, real live integration. None of that wimpish FTPing of batch scripts, or web site screen scraping, for me.

So I get the feeling that my initial "let's see if we can recommend a SOAP/WSDL API for metasearch then" approach was a little extreme for some, who were looking to produce a paper that postulated on the possibility of Web Services in metasearch being something worth investigating.

Still, like a good committee should, we ended up with what I think is a well-balanced paper that at least recommends something sensible.

The white paper has just been published; it is now up to us and the rest of VIEWS to take it beyond the recommending-white-paper stage. Time will tell.

You can find it here and here. And what did the new boy think of the process? Interesting, frustrating, time-consuming, and rewarding are words that come to mind. And yes, I would volunteer again if I believe I can add value to the process and/or the results, which I hopefully did this time.

Revolution or another competitor in library software?


I noticed a paper called THE COMING REVOLUTION IN LIBRARY SOFTWARE by David Dorman on IndexData's web site. I check this site regularly because they provide good, reliable, efficient open source technology for search and retrieval of metadata with packages such as Yaz and Zebra.

In the paper he argues that a new business model for delivering software to libraries, called commercial open source, will cause a paradigm shift in the market. He makes a plea for libraries to fund the initial development costs with a 10-point plan, and sees further development and support being charged to customers. This doesn't seem so different from a traditional commercial model, where development costs are recovered from customers through purchase costs and further development is funded through recurrent payments.

He claims open source software development results in 'less expensive and better designed software, and speedier development' than development by traditional vendors. In the competitive market of library software supply this seems difficult to justify. Competition is driving down customer costs because cost is a factor in winning bids. 'Better designed' software can give more reliable and usable solutions, generating fewer support calls. Lower maintenance costs are attractive to vendors because they make lower recurrent charges possible, and lower total cost for customers over the lifetime of a system is an important competitive advantage. Vendors seek speedier development to reduce time to market as competitors fight to attract new customers.

He says that for commercial open source vendors 'Development, rather than being an opportunity to sell more licenses, or a burdensome overhead cost to be avoided if possible, becomes the primary revenue generator'. My experience with Talis is that we enjoy our development; our purpose is to develop solutions for our customers, and if we don't develop attractive solutions we don't sell, so development is central to our success.

On quality, he implies peer review of open source code is more likely to achieve high quality software than a software engineering process incorporating reviews at each major milestone of analysis, design, implementation and test. In addition to its software development processes, Talis publishes end user documentation, database schemas, stored procedure code, and scripts for useful utilities. He suggests abandoning proprietary development tools in favour of open source alternatives; we use third-party programming tools such as Microsoft's Visual Studio, a best-of-breed tool, to speed delivery and enhance quality.

On commercial open source he says 'This Model requires a new and closer relationship between vendors and libraries'. Isn't this what all vendors are striving for? Talis fosters a community who share useful tools in source code, provided by customers and partners, through our Talis Developer Network. His vision assumes there are cohorts of willing library programmers with sufficient knowledge, skill, free time and resources to develop library software. Instead I see a community of customers behaving with enlightened self-interest to pass on experiences to fellow customers. In practice my guess is that IndexData will have a core development team with a few external trusted developers providing code fixes.

I don't see a paradigm shift; I see another competitor in the library software market. [...]

Mobile and PDA technologies and their future use in education


This is the title of the latest JISC Techwatch report, published in November 2004, which I've just dipped into. Here's their overview:
In recent years there has been a phenomenal growth in the number and technical sophistication of what can loosely be termed 'mobile devices' such as PDAs, mobile phones and media players. Increasingly these devices are also internet-enabled. This JISC report reviews the current state of the art, explores the potential uses within education and discusses some of the trends in technological development such as wireless networking, device convergence and 'always-on' connectivity.
An email update from one of the authors, via the Techwatch email list, last week, points out that there remains considerable uncertainty ('fog') around fast wireless access technologies, but the following conclusion serves to emphasise, for me, the need for libraries and their systems suppliers to be focusing on delivering data and services to these technologies:
... widespread adoption by students and staff of always-on mobile devices will partly be driven by the development of wireless broadband networks that can deliver the Internet to these devices. As the competition to deliver high speeds through the various technology paths increases so the likely time to market for low cost consumer solutions is likely to fall. As currently planned by manufacturers this kind of high speed access should be relatively normal by the end of the decade.
Although this has an academic library perspective, it will surely apply equally to actual and potential users of public libraries because this is about general consumer technology. Once again it's a reminder to take the library to the users, use the technology that they use (redefining the meaning of 'mobile' for libraries!), or be ignored.

Amazon make queueing a reliable experience


An Amazon Web Services announcement which snuck under my radar recently was the launch of their Beta [aren't they all nowadays!] Simple Queue Service.

This is not, as you may at first think, something to keep the people waiting behind the person who is checking out every book on their favourite subject, whilst returning all the items found in their three-year-old's toy box, amused, so they don't hassle the person waving the bar-code reader when they eventually arrive at the front of the queue.

No, this is a bit of technology delivered as a service which should excite the developers of interactive applications, whether or not those applications access Amazon content. It provides a general-purpose service to manage a set of queues of up to 4,000 data messages, each up to 4 kbytes in size, with a maximum message lifetime of 30 days.

When developers are building applications and services which involve the interaction between more than one system, they very quickly bang up against the need to pass messages between those systems. Most developers will tell you that this is not rocket science, even when the message delivery has to be reliable [some form of guarantee that a message is not lost, or incorrectly delivered].

The problem that is often tripped over when implementing such systems is that for messages to be delivered reliably they need to pass through a messaging system which keeps temporary copies of messages, manages queues, and so on. Such systems need to be managed, maintained, backed up, etc. The overhead of such housekeeping operations is often considered to be such a pain that it can detract from the business case for delivering a new service.
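To illustrate the pattern being delegated here, below is a minimal Python sketch of the send/receive/delete cycle against a hosted queue. It uses today's boto3 SQS client purely to show the model; the 2005 beta interface described in this post was different, and the queue name is invented.

    # Sketch of the hosted-queue pattern: send, receive, delete once processed.
    # Uses today's boto3 SQS client purely for illustration; the original beta
    # API differed. Running this for real requires AWS credentials.
    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="panlibus-demo")["QueueUrl"]

    # Producer side: hand a small message to the hosted queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody="new reservation for item 123")

    # Consumer side: pick the message up and delete it only once safely
    # processed, so an unprocessed message is redelivered rather than lost.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])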

So what are Amazon up to in launching something that will be hidden under the hood of other people's applications, and which, unlike their other Web Services, will not necessarily lead to clicks back to buy stuff from them?

Firstly, I would expect that it is a low-cost service to provide. They have almost certainly been using this technology in-house to support their own services for some time. Adding a few publicly visible servers to their set would not add much overhead.

Secondly, are they dipping their toe into the emerging market for the supply of software component services - a software equivalent of Sun's $1-a-CPU-cycle service?

Whatever their commercial strategy on this, what they are doing is floating it on the trusted Amazon brand.

OK, you want to delegate to some third party the job of looking after the messaging queues that underpin your application. So who do you pick? Someone you trust, with a 'good name', so why not Amazon? Would you choose them over some little-known hosting company, or maybe another little company with their headquarters in Seattle?

I'll leave you to ponder on that....

Meditations from ALA


Ken Chad, Executive Director, Talis

Where's the innovation? "Where do you see the innovation coming from?" asked Andrew Pace from North Carolina State University towards the end of Friday's "View from the Top" seminar. The question was addressed to me and the other panelists: the CEOs and chief executives in the library and information industry. Judging from the response of the main US library system vendors, not from there! Roland Dietz, President and CEO of Endeavor (owned by Elsevier), had earlier, and not surprisingly, singled out Google as a major challenge. So is the innovation going to come from outside: Amazon, Google, even Microsoft? For me at Talis this is a fundamental question. We are putting a lot of investment into smart people and have some smart ideas too. Of course we have to keep our focus on evolving our core products and services, but we won't survive long unless we innovate.

Where's the value? The seminar provoked a lot of discussion about the "value" of libraries and how that appears not to be expressed in the dollars spent with the library automation vendors. Libraries (especially public libraries) are faced with budget cuts. Money is being spent not so much on library technology but rather on other enterprise-wide systems for the institution as a whole. We see this too in the UK. Money (lots of it in some cases) in universities and local authorities is going into human resources (or CRM) systems, finance packages, portals and, notably in HE, Virtual Learning Environments (VLEs). Bob Walton, the Vice President for Business and Finance at the College of Wooster and a recent purchaser of such systems, wondered why it is that, in his view, compared to library systems, these other enterprise systems are: less sophisticated; less reliable; more expensive in terms of software licensing; more expensive in ongoing maintenance; take much longer (three times longer?) to implement; and are hugely (ten times?) more expensive in terms of training. Maybe it's simply because there is less competition? That market is continuing to consolidate, as will the library market. But the short answer is that's where the institution sees the value. It's true that over the last 25 years librarians and vendors have jointly done a good job in implementing and developing reliable, high-quality systems. Rob McGee (head of RMG consultants) remarked that maybe, as the library vendors had done such a great job, they should get into this "ERP" sector? "The entry costs are now too high", thought Vinod Chachra from VTLS.

Value on my mind. The value thing is on my mind a lot, of course. On the plane over I was reading the Guardian Life section. The job ads in the IT section are just one way to see what's going on in the industry, especially in universities. I note that a major UK university is going to be spending around £25,000 a year (more if you take all costs related to employment into account) on a person to work primarily on integrating the library system with the VLE. A friend of mine recently got a similar job at another university. It's not a short-term contract job either, so over five years that's a substantial sum (certainly compared to the cost of library software) being spent on just one aspect of "integration". That's just some indication of where universities see the value and, not surprisingly, it's about improving the overall learning environment. So what about public libraries? Where d[...]