The Medium is the Message

On libraries, technology, innovation and trends (with respect to Marshall McLuhan)

Updated: 2018-04-10T06:55:16.735-05:00


Got a Mobile Device? You May be Working an Extra Day a Week


Having a mobile device has changed the way I work. Being always connected allows me to see emails and status updates from anywhere, at any time. I can't help but notice email, Twitter messages, and other alerts when using my iPad at home at night or on vacation, with audible notifications and new email messages popping up on the screen as I watch LOL cats.

A recent survey by mobile management provider Good Technology, as reported in InfoWorld, indicates that my reliance on my mobile devices means I may be working on average seven more hours per week. InfoWorld published some of the survey's key findings:
  • 80 percent of people continue working when they've left the office, for an average of seven extra hours each week
  • 60 percent do it to stay organized, half because customers want quick responses, and 31 percent just find it hard to switch off at night
  • 68 percent of people check work email before 8 a.m., with an average first look at 7:09 a.m.
  • 50 percent check their work email while still in bed
  • 40 percent do work email after 10 p.m.
  • 69 percent will not go to sleep without checking their work email
  • 57 percent check work emails on family outings
  • 38 percent routinely check work emails while at the dinner table

The Key to Innovation? Ignore Everybody


One of the techniques libraries use to plan innovative services is to uncover the needs of their customers. However, I always wonder whether customers really know what they will need tomorrow; they usually only know what they need today. As a result, being innovative may require librarians to ignore their customers.

Copywriter and cartoonist Hugh MacLeod wrote a series of blog posts that went on to become "How To Be Creative." His book, Ignore Everybody: and 39 Other Keys to Creativity, grew from that series and contains tips that I feel can be applied to librarians looking to become more innovative:
1. Ignore Everybody
2. The idea doesn’t have to be big. It just has to be yours.
5. You are responsible for your own experience.
6. Everyone is born creative; everyone is given a box of crayons in kindergarten.
8. Companies that squelch creativity can no longer compete with companies that champion creativity.
10. The more talented somebody is, the less they need the props.
11. Don’t try to stand out from the crowd; avoid crowds altogether.
18. Avoid the Watercooler Gang.
20. The choice of media is irrelevant.
22. Nobody cares. Do it for yourself.
27. The best way to get approval is not to need it.
30. The hardest part of being creative is getting used to it.
36. Start blogging.

Augmented Reality and Libraries


In the movie The Terminator, the viewer is frequently taken to the Terminator's point of view. We know it is the Terminator's POV because the image is digitized and the people he is chasing are more luminous than objects in the foreground and background. In the margins of the viewpoint are scrolling columns of characters, including numbers and acronyms. The data changes so rapidly that it leaves no doubt we are viewing the world as the Terminator does.

Science fiction? Well, yes. However, parts of the Terminator's POV are no longer sci-fi. Augmented reality (AR) is the application of computer-generated imagery embedded into live video streams as a way to expand information as it relates to the real world. Through the use of AR technology, information about a user's surrounding environment, and the objects within it, is stored and then retrieved as an information layer on top of a live real-world view.

The technology behind AR requires a camera, a target, and software which renders contextual data, images, or 3D animations on top of the live image. The target could consist of a graphic or a physical object. Google's recent announcement of their Google Glasses prototype is another step in the development of applied AR. (video)

As far as using AR technology in libraries, Ken Fujiuchi proposes possible uses: "When someone finds a book in the library catalog, they can have the option to snap a QR code or unique image of the book, which will first store the information about the book. Then the user can first be directed to a specific section of the library, and once they are in the right section they can use a mobile device to scan the book spines to start being guided towards the book they are looking for."

Helene Blowers paints this scenario: "When I shift my thinking about AR apps to the physical library space I see our whole collection opening up before our eyeballs. Imagine the ability to walk down an aisle and see the reviews and popularity of an entire shelf of titles just by pointing the camera lens on your phone at the spines (or outfacing covers)."

Bo Brinkman, associate professor at the Armstrong Institute for Interactive Media Studies at Miami University (OH), has created a prototype augmented reality shelf-reading application. (video)

As is the case with many emerging technologies, there are many competing standards, and a marker created for one application is not viewable using another. String Labs makes both a reader and test targets available to experiment with the technology. One can also play with creating AR experiences using applications such as Aurasma Lite for iOS.

Here are some other possible uses for AR, with the assumption every information source and device is networked:

- Scan a building to find out if study rooms are available
- Scan a building to identify hours of service, or which librarians are on duty. Touch screen to contact (text, IM, etc.)
- 3D images of special collection artifacts are viewable from a QR code or bibliographic record
- Physical exhibits and artwork can provide supplemental content and materials

Do you have any ideas?

Resources:
- Educause: 7 Things You Should Know About Augmented Reality
- How Stuff Works: Augmented Reality
[...]
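Fujiuchi's scenario (scan a book's record, then be guided to its shelf) boils down to a lookup from call number to shelf location. Here is a minimal Python sketch of just that lookup step, using entirely hypothetical shelf data; a real AR wayfinding app would pull these ranges from the ILS and handle full LC call numbers, not just the simple letters-plus-integer form assumed here.

```python
from bisect import bisect_right

# Hypothetical shelf ranges: (first call number on the shelf, shelf label).
SHELVES = [
    ("QA1", "Shelf 3A"),
    ("QA76", "Shelf 3B"),
    ("QA90", "Shelf 3C"),
    ("QB1", "Shelf 4A"),
]

def normalize(call_number: str) -> tuple:
    """Split a simple call number into (class letters, integer) so that
    e.g. QA76 sorts after QA9. Handles only 'letters + integer' forms."""
    letters = "".join(ch for ch in call_number if ch.isalpha())
    digits = "".join(ch for ch in call_number if ch.isdigit())
    return (letters, int(digits or 0))

def shelf_for(call_number: str) -> str:
    """Return the label of the shelf whose range contains call_number."""
    starts = [normalize(start) for start, _ in SHELVES]
    idx = bisect_right(starts, normalize(call_number)) - 1
    return SHELVES[max(idx, 0)][1]
```

The AR layer would then draw an arrow toward, say, `shelf_for("QA80")` on the live camera view.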

Using a Course Management System to Manage Promotion and Tenure Review Documents


First off, let's get right to the first question. Gasp!!! Yes, it has been over a year since my last post to this blog! The cobwebs have taken over and the number of readers has dropped. While I have a regularly scheduled event on my calendar to create new entries, well, work has gotten in the way. Fortunately, my lame excuse is also a good topic for a post!

As regular readers of this blog are aware, one of my regular themes is scholarly communication; more specifically, the broadening of what is considered scholarship for promotion and tenure purposes. During the past year, I have been serving as the Procedures Oversight Designee (POD) for the Ohio State University Libraries. In short, my role is to help ensure that P&T reviews follow the procedures as defined by the Board of Trustees, the Office of Academic Affairs, and the Libraries.

In addition to the normal workload required when taking on such a role, there were many significant changes made to our review process, PLUS we experienced a bit of a 'baby boom.' Until this past year, a committee of tenured faculty consisting of about 1/3 of the eligible voting faculty reviewed all cases. Given the size of the review group, and the fact that there are normally around three cases to review annually, materials were made available in a single paper dossier stored in our HR office. Committee members checked them out and reviewed them on site. Given the scale, the system worked.

Last year, our faculty voted to change the review procedures so that all the eligible faculty would review all the cases. At the same time, the number of cases scheduled for review was four times larger than usual. Having almost 40 individuals descending upon HR to review over a dozen dossiers made little logistical sense. This convergence of events required a significant rethinking of the method used to make the review documents available.

The solution was to create an eDossier system using our course management system. A 'course' was set up for the review materials, with each candidate given their own content tree. All the materials that made up the physical dossier were scanned and uploaded into the system; the total number of documents for all the candidates was just over 600. All the eligible voting faculty were added as students and were granted access to those dossiers they were eligible to review. Access to materials was turned on and off as required by the review schedule.

While this approach required a significant time commitment on my part, it really represented a small percentage of the time saved collectively by faculty reviewers, since they didn't have to take a trip over to HR to read paper dossiers. Instead, the dossiers could be reviewed online wherever and whenever. One faculty member commented that they even reviewed materials on their iPad while waiting at an airport. There will be additional workflow efficiencies in future reviews, since documents from pre-tenure reviews will be added into the system as they are made available.

Based on my experience, one of my new goals is to develop a plan that uses a similar approach to distribute materials to external evaluators. This will require buy-in from the Office of Academic Affairs.

While the POD role is considered an overload responsibility, all the changes turned it into a full-time job at times. Yet I still had to manage it, plus my regular job responsibilities, plus keep a reasonable work-life balance. Something had to give, and it was this blog. So, please accept my excuse for the long time between posts. It's good to see you again. [...]

Should Libraries Create Native or Web Apps?


I have been having an increasing number of conversations with colleagues about the creation of mobile apps. Much of the conversation is not about IF apps are needed; instead it is focused on how apps should be developed.

There are two different types of mobile apps, each with advantages and disadvantages. The apps one would find in the iTunes App Store or the Android Marketplace are known as "native" apps. Native apps are pieces of software that must be installed on the device and in most cases are downloaded from a distribution point. A web app, on the other hand, is not a piece of software but a web site optimized for viewing on mobile devices. A well-designed web app can have all the look and feel of a native app.

Develop a native app if:

- your library needs to take advantage of the features built into the device itself, for example to vibrate the phone or use GPS. However, this will be changing soon as HTML5 rolls out. Web application developers are already using solutions like PhoneGap, an open source framework that provides support for a variety of device features on a variety of platforms. (video)
- your library needs to make sure content or a service is available offline. If the core purpose of your application is to make your content available without an Internet connection, then a native app is needed.
- performance and user responsiveness are crucial
- your library is looking to make money directly from the sale of the app
- your application needs to access the device file system

Develop a web app if:

- your library web site has all the same content that will be featured in the app
- your library is interested in potentially reaching users on different devices and platforms with the same app. An Apple native app can only be used on iDevices and is not easily ported to other platforms such as Android and BlackBerry. Web apps are platform-agnostic.
- your library wants its app content to appear in search engine results. Library users are beginning to demand that library mobile content show up in those results optimized for mobile devices. Content in a native app will not show up in Internet search results.

General considerations:

- Native app development could be more expensive than building web apps, since a greater skill set is required to build apps for multiple platforms.
- Native apps require the use of a software development kit supplied by each operating system creator.
- Developing apps for multiple platforms requires maintaining and creating enhancements for each.
- Native app user interfaces tend to be smoother and take greater advantage of the full graphics capabilities of a device.
- Web apps require round trips to the server where the app is hosted, whereas a native app responds almost instantaneously. Web app content is more current because it refreshes itself from the network.
- Most native app stores require approval; web apps can be deployed immediately. Native apps require updates to be installed; web app changes are immediate.
- Native apps may be more secure.

Resources:
- Meredith Farkas. The Library in Your Pocket: Mobile Trends for Libraries
- Brian Fling. Mobile Design and Development: Practical Concepts and Techniques for Creating Mobile Sites and Web Apps
- Lorraine Paterson. Designing for Mobile Devices in Higher Education Research
[...]
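One practical wrinkle behind the "web app" option is simply deciding when to serve the mobile-optimized pages. Here is a crude server-side sketch in Python; the template names are hypothetical, the user-agent token list is illustrative rather than exhaustive, and CSS media queries are generally the more robust route.

```python
# A web app is just a site optimized for mobile browsers. One simple
# (and admittedly crude) way to serve mobile-optimized pages is to
# inspect the User-Agent header of each request.
MOBILE_TOKENS = ("iphone", "ipad", "android", "blackberry", "webos")

def is_mobile(user_agent: str) -> bool:
    """Return True if the User-Agent looks like a mobile browser."""
    ua = user_agent.lower()
    return any(token in ua for token in MOBILE_TOKENS)

def pick_template(user_agent: str) -> str:
    """Choose which (hypothetical) page template to render for a request."""
    return "catalog_mobile.html" if is_mobile(user_agent) else "catalog.html"
```

A framework view would call `pick_template(request.headers["User-Agent"])` and render the result; the content behind both templates stays the same, which is the point of the web-app approach.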

Technology in Use at 2011 North American International Auto Show


My buddy Jeff and I took our annual trip up to Detroit to check out the North American International Auto Show this past Monday. While we were there to see all the new automobile technology, I again paid attention to use of technology on the floor:

  • The Microsoft Kinect 3D controller was being used by both Chevrolet and Ford. Chevy had five booths with a side-by-side racing game to promote the Volt, while Ford used it to capture images of attendees passing by a kiosk, green-screen the images over various backgrounds, and display them. While many manufacturers had touch-screen displays for product information, Fiat used Kinect to create an interactive display shown on a large screen for everyone to see (à la Minority Report).

  • Chevrolet had a Camaro flanked by cameras that produced 3D images.

  • Toyota had a touch-screen wall with three large video panels that allowed one to explore the Prius. Ford had a touch screen wall that allowed one to create custom paint jobs for the Mustang.

  • Technology/social media seen at past shows, such as foursquare and Facebook, were still in use. One exhibitor picked up on the fact that I had checked in and sent me an @mention to flash their tweet at their booth for a prize. (I didn't catch the tweet)

  • Just two years ago only Kia was using QR codes. This year there were many. I wasn't the only one snapping images of codes this year.

  • Flickr continues to be the image service of choice with over 3,500 photo uploads

Consuming Delicious Linkrolls


No. Linkrolls are not a delicious appetizer or a holiday baked good.

Linkrolls are a way to have Delicious bookmarks displayed as part of a web site or blog posting. Although the ability to create Delicious Linkrolls has been available for over five years now, I've only recently begun to leverage the service.

As an example of how they can be used, I've been working on an ePortfolio to track references to my various scholarly communications, appearing both in print and online. After performing various searches for references, I bookmark the ones I find on my Delicious site, making sure to add specific tags. The saved bookmarks can then be searched, sorted, and imported using scripting made available by Delicious. After the options are set, it is as easy as copying and pasting to get the targeted Delicious links embedded into a blog post or web page.

To create a Linkroll:

- Create or log into your Delicious account
- Add bookmarks with tags
- Go to the Delicious Linkrolls page
- Change preferences, including the appropriate tags
- Copy the HTML code into a web page or blog post. This will import Delicious bookmarks into any post or page.

The code generated by Delicious does not include any styling, so it may need some style tweaking to help the imported content blend in and adopt the look of your site.
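For the curious, the Linkroll script is essentially doing a fetch-and-render of your tagged bookmarks. Here is a rough Python sketch of the rendering half; the `u`/`d`/`t` field names mirror my recollection of Delicious's JSON feed format and should be treated as an assumption.

```python
import html

def render_linkroll(bookmarks, tag=None):
    """Render bookmark records (optionally filtered by tag) as an HTML
    list, similar to what the Linkroll script injects into a page.
    Each record is assumed to look like:
    {"u": url, "d": description, "t": [tags]}."""
    items = []
    for b in bookmarks:
        if tag is not None and tag not in b.get("t", []):
            continue  # skip bookmarks that don't carry the requested tag
        items.append('<li><a href="%s">%s</a></li>'
                     % (html.escape(b["u"], quote=True), html.escape(b["d"])))
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

Pasting the returned markup into a post is the manual equivalent of embedding the Delicious-generated script tag.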

Are Blogs Given Any Weight in Library Tenure and Promotion Cases?


I have stated in the past that I feel blogging is a valid form of scholarly communication in the discipline of academic librarianship. Still, the question continues to arise as to whether blogging should count as scholarship or a creative activity in academic promotion and tenure.

In "Bloggership, or is publishing a blog scholarship? A survey of academic librarians" (Library Hi Tech, Vol. 28, Iss. 3, pp. 470-477, DOI 10.1108/07378831011076701), Arthur Hendricks details the results of a survey of academic librarians to uncover how much weight their libraries, and/or their parent institutions, place on blogs in promotion and tenure reviews. Of the 67 complete responses, 53.6 percent indicated that their performance review committees do not weigh a blog the same as an article published in a peer-reviewed journal, while only 1.5 percent stated they did.

Respondents were asked, “If you consider the above blogs to be scholarly (equal to an article published in a peer-reviewed journal), please describe why.” Answers varied, but one person wrote, “I'm not sure I would say ‘equal to peer reviewed journal’ but as intellectually thoughtful, important, and influential? sometimes. They tend to be more in the formative stage, like a conference presentation rather than the lengthy, substantial, finished nature of a peer reviewed article.” Of those respondents who publish a blog, 57.1 percent indicated that they find others' blogs to be scholarly.

Younger librarians are more inclined to think of their blogs as counting toward scholarship than their older colleagues. Of those 22-30 years of age, 40.0 percent indicated that they thought their blog should count as scholarship, and of those 31-40 years of age, 27.3 percent thought their blog should count. None of those 41-50 years of age indicated that their blog should count as scholarship, and of those over 51, 12.5 percent considered their blog scholarly.

From the information provided in the paper, it appears that many of the respondents equate research with scholarship, when in fact research is a subset of scholarship. Scholarship is the creation of new knowledge or the organization of knowledge within a new framework or presentation. Scholarship can take the form of a peer-reviewed publication, but it can also be evidenced in other ways, such as exhibits, public performances, digital resources, and papers at professional meetings. So, if a blog communicates some sort of new knowledge or the organization of knowledge within a new framework or presentation, or is even seen as the equivalent of a conference presentation, it is indeed scholarship.

Criteria for evaluating any work of scholarship, in any form, should take into consideration originality, breadth of dissemination, and impact on scholarship and/or practice in the field of librarianship. I would argue that blogs may be having a greater impact on the practice of librarianship than traditional publications are. Blogs have invigorated the exchange of ideas within librarianship and have enabled academics to connect with a larger general readership for their insight and expertise.

What was very interesting is that, for an article discussing scholarly blogging, it did not include a single reference from a blog. If blogs are to be recognized as scholarly contributions, then they should also be cited as such.

See also: The “voice” of academic librarianship [...]

Key Findings from 2010 ECAR Study of Undergraduate Students and IT


The Educause Center for Applied Research (ECAR) Study of Undergraduate Students and Information Technology is a longitudinal study of students and information technology. The study focuses on what kinds of information technologies these students use, own, and experience; their technology behaviors, preferences, and skills; how IT impacts their experiences in their courses; and their perceptions of the role of IT in the academic experience.

The 2010 results are now available, based on 36,950 respondents from 127 academic institutions.

Some random key findings:

- 94% use their library web site for research; 1/3 several times a week or more
- 90% use presentation software and course or learning management systems
- 90% use texting or social networking for interactive communications; only 30% use them in courses
- 89% own a laptop or netbook
- 80% gave themselves 'high marks' on their skills in searching the Internet efficiently and effectively
- ~50% own a mobile device; 43% of those use them daily for accessing information and/or email
- 40% use VoIP services, like Skype
- 36% use web-based productivity software
- ~33% are online more than 10 hours a week; the same percentage spends between 11 and 20 hours; 9% spend more than 40 hours
- 1.4% use virtual worlds, like Second Life

The Go.OSU URL Shortening Service: Agile Development in Practice


About a year ago, I wrote about the idea of creating a University-branded URL shortening service. Late last month, a small team that I collaborated with at Ohio State launched such a service, called Go.OSU.

Right after I wrote my post last year, I created a short project description and shopped the idea around to potential partners. My primary selling point was that the service would lend the authority of the OSU brand to the shortened URLs included in social media, publications, etc. Although I received a lot of positive feedback about the idea, many potential partners were caught up with other projects.

Since there was so much positive feedback on the idea, this Spring I shopped the idea to the Office of the CIO. While receptive, the OCIO suggested that I build a business case document for them to consider. The idea would be vetted through their review process and, if it passed that first level of review, would be placed on a list along with other projects seeking funding. The project would move ahead into development if it received funding, or if other support was identified.

Although I appreciate the reality that projects at that level of an organization need to fit into processes and workflow, my gut said that this review process could take at least a year. It wasn't a very agile approach for such a lightweight project.

Later in the Spring, I was walking through a hallway at a conference on campus when I heard my name mentioned. It was Beth Black, a University Libraries colleague, and Ted Hattemer, from University Marketing Communications, talking about the idea. I jumped into the conversation. In literally 10 minutes, a decision was made that the project would be developed jointly by UL and UMC and that Ted was on board as our (very supportive) sponsor.

No committee consensus.
No formal meeting.
No user survey / needs analysis.

A few weeks later, Jim Muir, a developer on Beth's team, demoed a working prototype for the project team. A few weeks! How agile is that?

We pounded on it for a couple of weeks and sent Jim all our feedback so he could make changes. Ted brought in one of his designers, Jim Burgoon, to assist the other Jim with the interface design. The project didn't move ahead too quickly over the next couple of months due to competing priorities. It regained momentum in the early summer and was finally launched after we addressed security and legal issues.

Libraries are always talking about the need to be innovative and to take advantage of emerging technologies. Yet they slow it all down by forcing ideas and fledgling projects through formalized systems. As this project shows, innovation is not born in committee and through process. It is born from half-baked ideas and serendipitous meetings, and it grows by NOT adhering to formal processes or traditional methodologies.
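The post doesn't describe Go.OSU's internals, but the core of most URL shorteners is nothing more than mapping a database row id to a compact base-62 code. A minimal sketch (not the actual Go.OSU implementation):

```python
import string

# Base-62 alphabet: 0-9, a-z, A-Z. A shortener stores the long URL in a
# row, then uses the auto-incrementing row id to mint the short code.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode(n: int) -> str:
    """Turn a row id into a short code, e.g. for go.osu.edu/<code>."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n:
        n, rem = divmod(n, 62)
        code.append(ALPHABET[rem])
    return "".join(reversed(code))

def decode(code: str) -> int:
    """Turn a short code back into the row id for the redirect lookup."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

The redirect handler just calls `decode` on the path segment and issues an HTTP 301 to the stored URL; everything else (branding, abuse checks, reporting) is policy layered on top.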

Shakespeare Quarterly Testing Open Peer Review


In what now appears to be a trend, the NYT reports that the prestigious 60-year-old Shakespeare Quarterly is also experimenting with open peer review.

In the forthcoming fall issue, SQ will become the first traditional humanities journal to break away from the closed review system. Another journal, Postmedieval, is planning a similar trial for next year.

Mixing traditional and new methods, the journal posted online four essays not yet accepted for publication. A group of experts was invited to post their signed comments on MediaCommons, and anyone else could add their thoughts as long as they registered with the system. In the end, 41 people made more than 350 comments, which elicited responses from the essay authors.

Is Your Twitter Client Ready for June 30th?


Christopher S. Penn's post Are you ready for the Twitpocalypse? details coming changes to the Twitter API that will impact many of the widgets, sites, clients, and applications one may use to access Twitter.

Penn reminds us that on June 30th Twitter is ending support for basic HTTP authentication, and requiring that all applications that access Twitter via the API change to OAuth authentication. In short, any application, site, widget, etc. that uses basic authentication (entering your Twitter username and password) will stop working. Any application, site, widget, etc. that requires you to “authorize” an application will continue to work as intended.

OAuth is a technology that enables applications to access the Twitter platform on your behalf without ever asking you directly for your password. For users, switching to OAuth means increased usability, security, and accountability for the applications that you use every day.
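To make the difference concrete, here is a simplified sketch of the signing step an OAuth 1.0a client performs instead of sending your password. It follows the HMAC-SHA1 scheme from RFC 5849 but skips edge cases (duplicate parameter names, sorting by value as well as key); a real client should rely on an OAuth library rather than this sketch.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_signature(method, url, params, consumer_secret, token_secret):
    """Build a simplified OAuth 1.0a HMAC-SHA1 signature for a request.
    `params` should include both query and oauth_* parameters."""
    enc = lambda s: quote(str(s), safe="")
    # 1. Normalize parameters: percent-encode, sort by key, join with &
    norm = "&".join("%s=%s" % (enc(k), enc(v))
                    for k, v in sorted(params.items()))
    # 2. Signature base string: METHOD & encoded URL & encoded params
    base = "&".join([method.upper(), enc(url), enc(norm)])
    # 3. Signing key: consumer secret & token secret (never the password)
    key = "&".join([enc(consumer_secret), enc(token_secret)])
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The signature travels with the request while the secrets stay on the client, which is why Twitter can revoke one application's access without you changing your password.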

So, you may want to look up your favorite client and make sure it is set to go with OAuth, or be ready to switch on June 30th.

Are Most Effective Faculty Contributors “Permanent Associate Professors”?


I attended the annual address to the University Senate presented by Ohio State's Executive Vice President and Provost Joseph A. Alutto late last week. While I do not normally go to Senate sessions, I was given the heads-up that his address would include a discussion of potential changes to our promotion standards.

Although we talk about the need for faculty to have balance in their portfolios, the reality is that when it comes to promotion to full professor, faculty dossiers become a monopod: research. Some faculty have responsibilities that are essential for the organization to succeed, as is the case within the University Libraries system. The work of library faculty involved in e-resources or building digital collections makes visible and demonstrably outstanding contributions to the missions of the university. The perception communicated by many faculty is that since such activities are not traditional research, they will not be given much weight in full professor deliberations.

Provost Alutto's observations support this perception:

"That leads me to the standards used for promotion from associate professor to full professor. Here I am talking about cases in which the 30- to 40-year compact between university and professor, the thing we call tenure, has already been agreed to and celebrated. Given that this commitment has been made, the next question is what should be the basis for advancement from associate professor (with tenure) to full professor (with tenure)...

"One answer, and the one that is most reflected in our formal documents and policies, is “more of the same.” That is, a full professor is supposed to have more publications, greater teaching achievements, and higher service contributions to justify promotion. Wonderfully, for many faculty members, this is exactly the pattern we see played out. They continue to perform powerfully on all dimensions. However, in reality, promotion to professor tends to be based primarily on assessments of the impact of a faculty member’s scholarship in a particular discipline.

"If one reviews hundreds of such promotion cases, as does any provost, it becomes clear that promotion to full professor tends to be reserved for those whose research impact is clearly superior. The faculty member whose primary impact and distinctive contributions are in the areas of dissemination of knowledge through teaching or service to the university or professional associations will tend to be passed over for promotion to full professor—unless a department can find a way to “fudge” a demonstrated level of research impact."

As the Provost points out, this approach is insidiously harmful. It generates cynicism among productive faculty when they realize the “game” being played. This can frustrate productive faculty who contribute to their disciplines and the university in ways other than traditional research. It not only flies in the face of everything we have been told about the need for a balanced portfolio, it also overlooks the need to recognize evolving interests and skills. It tends to exacerbate to dysfunctional levels all differences in perspective about what is valuable, both personally and institutionally.

The Provost continues:

"Given these observations, I intend to work with faculty and administrative groups to begin focusing on the following:

- making certain that there are clear criteria for assessing “impact”—whether in terms of research, teaching, or service in cases of promotion to professor; in all such cases, these criteria should involve both quantitative and qualitative measures, most of which will require seeking data from external sources rather than relying on purely internal ones
- ensuring clear identification of the bases for promotion to professor; these might wel[...]

2010 Medical Library Association Conference Community Portal


I was contacted the other week by members of the organizing committee for the Medical Library Association Conference, to be held May 21-26 in Washington, D.C. After a few emails and a conference call, I have agreed to help out by coordinating their first annual meeting online conference community portal.

What is a conference community? What can an 'attendee' expect?

In essence, the conference community is an online experience being built around MLA 2010 that allows attendees (both in-person and virtual) to interact and share through various online social tools. Some of the content on the site will even be made available to association members who are not attending the conference.

Yes. Agreed. Nothing really new here. Such tools have been in use at conferences for years.

What IS new for (this) MLA is that many of these existing tools are being pulled together, with new ones added, and branded as the conference community portal.

We have also been given the green light by NPC representatives to use the portal as a sandbox for attendees to play around with other emerging tools. The specific tools that will be used are still in discussion/development and will be communicated through various conference channels in the coming weeks. Since this approach is new to all involved, we will be making up most of it as we go.

I would love to hear from MLA members about the neat things you have seen developed for other conferences that we should consider adding to the MLA10 conference community portal.

    Tom Sanville Resigns as OhioLINK Executive Director


    In my inbox yesterday morning was the news that Tom Sanville has submitted his resignation as Executive Director of OhioLINK, effective March 31. The OhioLINK system has grown and flourished primarily because of Tom's leadership (and by hiring a very talented staff!). When I came to Ohio State in 1992, OhioLINK was still crawling. The system consisted of an ILS that included catalog creation and maintenance; the online public access catalog; circulation, interlibrary loan, and document delivery; acquisitions and serials control; and collection development and management. While the OhioLINK Library Catalog is still a centerpiece, the system now includes databases, a Digital Media Center, an E-Book Center, an Electronic Theses and Dissertations Center, and the Electronic Journal Center. Just recently, I came across an old 1998 talking points document where I was touting the 1,400 electronic journal titles made available to our users via OhioLINK from two publishers, Elsevier Science and Academic Press. (see: Diedrichs, CP. E-journals: the OhioLINK experience) By last year, the Electronic Journal Center contained more than 8,200 full-text research journals (12.2 million articles) from 100+ publishers.

    Tom's reach went well beyond Ohio. If you are part of a consortium that licenses electronic journals, you should also thank Tom for his service. He was one of the pioneers in the establishment of consortium pricing from publishing giants like Elsevier. It is possible that one of the license agreements you have signed today (did I mention pricing?) grew from Tom's adept negotiation skills.

    The text of his email announcement:

    "I have submitted my resignation to Eric Fingerhut, Chancellor of the Ohio Board of Regents, as Executive Director of OhioLINK. This will be effective March 31, 2010. This should ensure a full transfer of my almost 18 years of knowledge and files to other staff.

    After my long tenure as director, this was a very difficult decision for me but I believe it's the right one for me at this time. With the continued evolution of an integrated Educational Technology infrastructure and the OhioLINK staff’s role in it, and with the establishment of key strategic projects as reflected in the OhioLINK Fall 2009 Update, this is an appropriate time to resign my position and to seek new challenges.

    In accepting my resignation the Chancellor has been gracious in recognizing that “OhioLINK is a wonderful example of the collaborations possible when many institutions work together with the support of the state to share resources and reduce costs. Your formative leadership in this effort is widely acknowledged and greatly appreciated… As educational technology and the format of academic materials rapidly changes, OhioLINK will be called on to play a critical role in expanding the availability of new resources to all academic institutions and directly to students and faculty.”

    The job of OhioLINK will never be done. It has been my greatest professional experience to have worked with the OhioLINK community and staff in building a world-recognized library consortium. I am grateful for having had the opportunity to serve with so many wonderful colleagues and friends over the years."

    Thank you, Tom, for sharing your vision and service to the state and the library profession.[...]

    Is a Twitterfarm Pranking the Jester?


    One of the more interesting discussions I have read since rebooting from a long vacation has been the Twitter weirdness uncovered by the Disruptive Library Technology Jester (a.k.a. Peter Murray).

    It all started when the Jester authored a blog post detailing his ALA Midwinter meeting plans. It appears that others have been constructing tweets that consist of his blog post headlines with links back to his postings, a practice that has increased dramatically in the past few weeks. The Jester uses a WordPress plugin to inject posts into his Twitter stream. He runs the BackType service to uncover commentary found in other social media sites so it can be added as comments to his postings. It was the BackType service that alerted the Jester to the Twitter updates.

    In all cases, the Twitter IDs used are unlike those of other spammers: the account names did not contain a string of numbers, and the accounts had numerous followers. The Jester did notice one thing in common with all of these updates: they came from the Twitterfeed service. He speculates that these users are grabbing his blog post feed using Twitterfeed and then syndicating it into their own Twitter streams. He has since tested this hypothesis and uncovered other blogs that are also being tweeted by others.

    What is interesting is that the tweets of the Jester's blog post headlines are not spam in and of themselves. The shortened URLs to his posts do not redirect the user to spam sites, but instead go to the original posts. However, the Twitter profiles of some of these accounts contain profile URLs for spam.

    In the comments of the Jester's post, D0r0th34 speculates, and Mr. Gunn concurs, that the prank being performed on the Jester sounds like a Twitterfarm, a la the Google linkfarm. Mr. Gunn observes:

    "Many of the follower qualification tools use follower ratios to determine spamminess of followers. Additionally, there’s a parallel in "macro" blogging where presumptive search engine spiders are really just post harvesters. The point is to provide real or whitelisted content for getting around spam filters, increasing pagerank, or making it look like a twitter account has real non-spam content."

    So, while the Jester's posts are not being used to propagate spam per se, it appears they are being used to make spammer user profiles look, well, a lot less spammy. I guess in a weird way that should make the Jester feel good. His content is good enough to make it around spam filters.

    Still, it could be troubling for the Jester if the additional posts directed to his blog flag it for banning/autobury by those sites that keep a lookout for those who post too much (e.g. Digg).[...]

    Capturing Employee Ideas: The CDC IdeaLab


    The CDC (Centers for Disease Control and Prevention) is a large government agency with 14,000 full-time, part-time, and contract employees. While headquartered in Atlanta, the CDC has a large, geographically dispersed workforce working in 19 facilities across the United States and 54 countries around the globe. Since many of the CDC's employees are isolated geographically, the processes of collaborating, communicating, and sharing information efficiently and effectively are challenging.

    The solution to their challenge is what they call the IdeaLab. IdeaLab is a web-based application which CDC employees are encouraged to use to post their ideas, comment on others’ posts, and vote on the quality of the posts and comments. Submissions are attributed and authenticated in real time. Ideas are categorized according to CDC organizational goals, and related ideas are affinity-grouped using tag clouds. A weekly “Bright Idea” highlights a submission that has broad agency interest across multiple national centers and offices. All communications are stored in a searchable archive that anyone at CDC can review at any time. IdeaLab enables CDC employees to "use their insights and experiences to help colleagues build and implement high-impact solutions to important public health challenges."

    The CDC hopes that the IdeaLab will:
    - Increase connectivity of CDC employees who support multidisciplinary, evidence-based solutions
    - Promote scientific crowd-sourcing and peer-to-peer networking to build ideas, enable virtual piloting and refinement of ideas, and foster rapid implementation and adoption of the best ideas
    - Foster retention and sharing of institutional memory
    - Improve interactions among networks of knowledge
    - Improve orientation for and assimilation by new employees
    - Accelerate health impacts by increasing employee-driven innovation and improving organizational efficiency[...]

    Harvard: Computers in Hospitals Do Not Reduce Administrative or Overall Costs


    Harvard researchers recently released the study Hospital Computing and the Costs and Quality of Care: A National Study, which examined computerization’s cost and quality impacts at 4,000 hospitals in the U.S. over a four-year period.

    The researchers concluded that the immense cost of installing and running hospital IT systems is greater than any expected cost savings. Much of the software being written for use in clinics is aimed at administrators, not doctors, nurses and lab workers. Additionally, as currently implemented, hospital computing might modestly improve process measures of quality but does not reduce administrative or overall costs.

    The researchers also found no reliable data supporting claims of cost savings or dramatic quality improvement from electronic medical records.

    The researchers did acknowledge that the modest quality advantages associated with computerization were difficult to interpret since the quality scores reflect processes of care rather than outcomes. Access to more information technology may merely improve scores without actually improving care by facilitating documentation of allowable exceptions.

    From the paper:
    "We used a variety of analytic strategies to search for evidence that computerization might be cost-saving. In cross-sectional analyses, we examined whether more computerized hospitals had lower costs or more efficient administration in any of the 5 years. We also looked for lagged effects, that is, whether cost-savings might emerge after the implementation of computerized systems. We looked for subgroups of computer applications, as well as individual applications, that might result in savings. None of these hypotheses were borne out. Even the select group of hospitals at the cutting edge of computerization showed neither cost nor efficiency advantages. Our longitudinal analysis suggests that computerization may actually increase administrative costs, at least in the near term."
    Himmelstein, D., Wright, A., & Woolhandler, S. (2009). Hospital Computing and the Costs and Quality of Care: A National Study. The American Journal of Medicine. DOI: 10.1016/j.amjmed.2009.09.004

    What Technology? Reflections on Evolving Services EDUCAUSE Report


    I just finished reading What Technology? Reflections on Evolving Services, a report from Sharon Collins and the 2009 EDUCAUSE Evolving Technologies Committee. For the first time, information resource management technologies for libraries were featured as an evolving technology.

    From the report:
    "The increasing primacy of highly distributed digital resources has brought disruptive change to the way libraries must approach their work to remain relevant to their parent organizations and constituencies."

    "Organizing content to support research and learning is at the heart of the library's institutional role. Once limited to applying subject terms, co-locating physical materials, and producing research guides, this role has been changed by the volume and variety of online resources, which require new tools to more effectively meet the needs of users. A growing collection of technologies and tools can be used to more granularly organize, customize, and personalize the online information environment to fit professional, learning, and research activities."

    "These technologies are evolving away from being strictly stand-alone tools and resources and are converging into a more interoperable, collaborative, enterprise-level information management environment — one more closely integrated with teaching, learning, research, and administrative systems. Underlying system architectures are focusing more on providing discrete services (service-oriented architecture) rather than monolithic systems, enabling more interoperable and customizable workflows."

    "By combining discrete services with cloud storage and cloud-enabled applications, institutions can build collaborative work environments between libraries as well as between libraries and non-library units, both on and off their home campuses, for discovering, acquiring, describing, and managing all types of resources. Layered over this enterprise-level resource management environment, information discovery and management tools are providing individuals and workgroups with much more intuitive and productive ways to discover, manipulate, incorporate, and share information for teaching, learning, and research, allowing users to shift time from the mechanics of managing specific resources to a focus on analyzing the information itself."

    NSF Funded Workshop on Scholarly Evaluation Metrics


    A one-day NSF-funded workshop entitled "Scholarly Evaluation Metrics: Opportunities and Challenges" will take place in the Renaissance Washington DC Hotel on Wednesday, December 16th, 2009. The 50 available seats were filled the day the workshop was announced. I would have loved to be in attendance, given my role as a P&T chair, but I heard about it four days after the announcement.

    The focus of the workshop is the future of scholarly assessment approaches, including organizational, infrastructural, and community issues. The overall goal is to "identify requirements for novel assessment approaches, several of which have been proposed in recent years, to become acceptable to community stakeholders including scholars, academic and research institutions, and funding agencies."

    Panelists include Oren Beit-Arie (Ex Libris), Peter Binfield (PLoS ONE), Johan Bollen (Indiana University), Lorcan Dempsey (OCLC), Tony Hey (Microsoft), Jorge E. Hirsch (UCSD), Julia Lane (NSF), Michael Kurtz (Astrophysics Data Service), Don Waters (Andrew W. Mellon Foundation), Jevin West (UW), and Jan Velterop (Concept Web Alliance).

    A summary of the goal of the workshop: The quantitative evaluation of scholarly impact and value has historically been conducted on the basis of metrics derived from citation data. For example, the well-known journal Impact Factor is defined as a mean two-year citation rate for the articles published in a particular journal. Although well-established and productive, this approach is not always best suited to fit the fast-paced, open, and interdisciplinary nature of today's digital scholarship. Also, a consensus seems to be emerging that it would be constructive to have multiple metrics, not just one.

    In the past years, significant advances have been made in this realm. First, we have seen a rapid expansion of proposed metrics to evaluate scientific impact. This expansion has been driven by interdisciplinary work in web, network, and social network science, e.g. citation PageRank, the h-index, and various other social network metrics. Second, new data sets such as usage and query data, which represent aspects of scholarly dynamics other than citation, have been investigated as the basis for novel metrics. The COUNTER and MESUR projects are examples in this realm. And, third, an interest in applying Web reputation concepts in the realm of scholarly evaluation has emerged and is generally referred to as Webometrics.

    A plethora of proposals, both concrete and speculative, has thus emerged to expand the toolkit available for evaluating scholarly impact, to the degree that it has become difficult to see the forest for the trees. Which of these new metrics and underlying data sets best approximate a common-sense understanding of scholarly impact? Which can best be applied to assess a particular facet of scholarly impact? Which ones are fit to be used in a future, fully electronic and open science environment? Which make the most sense from the perspective of those involved with the practice of evaluating scientific impact? Which are regarded as fair by scholars? Under which conditions can novel metrics become an accepted and well-understood part of the evaluation toolkit that is, for example, used in promotion and tenure decisions?

    I look forward to the Twitter stream.[...]
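    The two metrics named above are simple enough to sketch in a few lines. This is an illustrative sketch of the standard definitions, not code from the workshop; the function names are my own:

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least
    h papers with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i          # the i-th ranked paper still has >= i citations
        else:
            break
    return h

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the citable items from Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers have at least four citations each, but not five with five.
print(h_index([10, 8, 5, 4, 3]))   # -> 4
```

    Note how little of "impact" either number captures: neither weights who is citing (as citation PageRank does), and neither sees usage or query data at all, which is exactly the gap the workshop metrics aim to fill.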

    Have Life Science Researchers Removed Themselves from the Mainstream Library User Population?


    A report entitled "Patterns of Information Use and Exchange: Case Studies of Researchers in the Life Sciences" has been released by the British Library and the Research Information Network.

    The report was developed by capturing the day-to-day patterns of information use in seven research teams from a wide range of disciplines. The study, undertaken over 11 months and involving 56 participants, concluded that ‘one-size-fits-all’ information and data sharing policies are not achieving "scientifically productive and cost-efficient information use in life sciences."

    Skip past all of that and jump to page 47 of the report. There, they state (I'll let the report speak for itself):

    "Conventional university library facilities rank low as a vehicle for accessing published information. The traditional role of professional information intermediaries has been largely replaced by direct access to online resources, with heavy reliance upon Google to identify them. Given the limitations of generic search engines such as Google, measures to reconnect researchers with IIS professionals could bring improvements in information retrieval, and benefits to the research process.

    "Researchers also tend to use services that have been ‘proven’ by colleagues, or to interrogate websites they regard as authoritative and comprehensive in their field. When they use such services, researchers tend to take the results on trust: the specificity and the breadth of the information retrieved do not appear to require further enquiry.

    "The result of all these developments is that many life science researchers have removed themselves from the mainstream library user population. They do not even use the library catalogue. Library-based services can replace the services researchers do use only by demonstrating that they can improve retrieval capability, and deliver results within a timeframe that corresponds to researchers’ own patterns of work. This is a significant challenge when researchers are driven by a desire for immediate online access to specific resources of interest, at a time convenient to them, and from a known and trusted source."

    Overall, they found that the groups they studied use a narrow range of search engines and bibliographic resources, for three reasons:
    - lack of awareness and time to achieve or build a broader suite
    - the ‘comfort’ that comes from relying on a small set of familiar resources, usually endorsed by peers and colleagues, and
    - the cost in time and effort needed to identify other resources, and to learn to use them effectively.

    They detail what would appear to be the emerging roles of the library in a researcher's information-seeking patterns:

    "The challenge for institutional information services is thus to develop and provide online services geared to the needs of their research groups and thereby to add value to the research process, facilitating the use of new tools, providing individuated professional support, as well as advice, training and documentation on a subject or discipline basis. Any such strategy would have to be proactive: as noted by our regenerative medicine group, researchers are reluctant to adopt new tools and services unless they know a colleague who can recommend or share knowledge about them."

    "Library and information service providers in the higher education sector need to come to a clearer view of their structures and roles... some of our groups expressed a desire for better portals and tools to identify the information resources relevant to researchers wo[...]

    A Need for University Branded URL Shortening Services?


    Twitter users are quite familiar with URL shortening tools as a way to include web links within their 140-character limit. URL shortening is the process of taking a long URL and turning it into, well, a short one that redirects to the original. Shortened URLs are extremely useful in Internet conversations such as forum threads, IM chats, etc. They are essential in communication channels where messages are limited to a specific number of characters, such as Twitter. Shortened URLs can also be useful when reading long URLs aloud to customers over the phone, adding URLs to print materials, and when showing them on video displays or during presentations. Shortened URLs are also easier to enter into a mobile device.

    There are many services that create shortened URLs. OCLC was ahead of this game way back in 1995 with their PURL (Persistent Uniform Resource Locator) service. While the goal of PURL was to allow content providers to redirect users as content moves from site to site, it did so using shorter URLs.

    The mechanism for resolving a shortened URL is simple: the browser is directed to the shortened URL site. That site performs an HTTP redirect of the address, and the browser is sent to the registered long URL. The URL shortening service maintains the master table of redirects.

    One problem is that all the shortened links die when such free services die, as one popular free service nearly did in August '09. As a result, members of the academic community who rely upon such services will eventually lose access to their shortened links. This will require reentering the URLs into another service, which might also die.

    Another concern with existing shortening services is that the URL domain plays an important role in identifying the authority of a Web resource. Shortened URLs lose their link and organizational information. All brand/name recognition - the authority of an organization - goes away, since the domain is hidden within the shortened URL. One needs to click on the shortened URL and visit the redirected site before discovering the domain's authority.

    An example of where short URL branding works is with Flickr. Each photo page also gets a shortened Flickr URL. The shortening domain is owned and operated by Flickr itself, so the shortening service will be as reliable as the Flickr service. When someone clicks such a link they know they will get to a Flickr photo page, not a redirect to a site containing malware.

    It therefore makes a lot of sense for academic institutions to consider building their own URL shortening services as a way to brand and create authority with their shortened URLs. One university that has done just that is the University of Nebraska-Lincoln. Wayne State University also appears to have such a service. I would love to see a local shortening service here. If I had the programming chops, I would write it over the next weekend. I know: it's easier to start a shortening service than it is to maintain it in perpetuity. Yet creating an in-house URL shortening service not only helps to promote and support the institutional brand, it lessens the chance that the institution's carefully crafted custom links will die if the third party goes down, or out of business.[...]

    Ohio State President Calls for Tenure Changes


    Thank you, President Gee!

    At his annual presidential address yesterday afternoon, Ohio State University President E. Gordon Gee said it is time for faculty members to be evaluated on the quality and impact of their work.

    New faculty members at Ohio State University Libraries enter as assistant professors and have six years to build up their record of scholarship, teaching and service. They receive performance evaluations every year, a fourth-year comprehensive review, and in their sixth year undergo a more rigorous examination to see if they measure up to the level of performance required for tenure and a promotion to associate professor. Library faculty can then choose to undergo an additional review later in their careers to attain the rank of full professor.

    President Gee said in his address that professors should be rewarded for their talents and should be encouraged to work with academic departments outside their own. Instead of using an arbitrary formula for evaluation, he would like OSU to create a system in which faculty members are judged on the quality of their work and their impact on students, their disciplines and the community.

    This is exactly the position I have been advocating, not only on this blog but in discussions with my library faculty colleagues. Even though I have articulated to colleagues all the points that Gee highlighted, inertia has indeed won out.

    From his prepared remarks:

    Let me state this directly: We must change our recognition and reward criteria.

    Since I returned to Ohio State two years ago, I have made this point a number of times. Changing the way we define scholarship, appreciate new forms of engagement, and properly reward superb teaching can be this University’s signal differential.

    If we do not properly and tangibly value those activities, our efforts to extend our resources more fully into our communities will be stymied. We must take it upon ourselves to revise the centuries-old equations for promotion and tenure and develop new reward structures.

    Without a doubt, this is a nettlesome issue. And I am not the first person to raise it. Ernie Boyer articulated the case nearly 20 years ago in a speech here on campus. And of course he did so very persuasively in his 1990 book, “Scholarship Reconsidered,” in which he called for “recognition that knowledge is acquired through research, through synthesis, through practice, and through teaching.”

    At Ohio State, and at colleges and universities across the country, we have long had faculty committees devoted to looking at revising promotion and tenure standards. And yet, the status quo remains. Inertia is winning.

    I believe we must finally speak aloud the truth: that some arbitrary volume of published papers, on some narrowly defined points of debate, is not necessarily more worthy than other activities.

    Ladies and gentlemen, this University is big and strong enough to be bold enough to judge by a different standard.

    We can dare to say, “No more,” to quantity over quality.

    We can stop looking at the length of a vita and start measuring its true heft.

    This University, finally, can be the first to say, “We judge by a different standard.” And let others follow our lead, if they wish.

    I sit here thinking: what if OSU Libraries HAD acted a year ago and begun to change our criteria? Would we have been included in the President's speech as the leaders of where the University should be heading? Would that have raised our visibility on campus? As a pro[...]

    Process of Tenure and Promotion a Monster That Eats Its Young?


    The approach that Kathleen Fitzpatrick has taken with her new book manuscript might be one possible path that the future of scholarly communications will take.

    Ms. Fitzpatrick has made the manuscript of Planned Obsolescence: Publishing, Technology, and the Future of the Academy available online for open peer review. The 'book' is part of Media Commons Press, whose tagline is "open scholarship in open formats."

    While the manuscript will still go through the traditional blind peer-review process and is forthcoming from NYU Press, Fitzpatrick plans to incorporate reader comments from the online manuscript into her revisions. She asserts:
    "One of the points that this text argues hardest about is the need to reform peer review for the digital age, insisting that peer review will be a more productive, more helpful, more transparent, and more effective process if conducted in the open. And so here’s the text, practicing what it preaches, available online for open review."
    Not only is the process being used to write the manuscript exciting, the manuscript is as well. A couple of parts of the text relate to the academic rewards system:
    "our institutional misunderstanding of peer review as a necessary prior indicator of “quality,” rather than as one means among many of assessing quality, dooms us to misunderstand the ways that scholars establish and maintain their reputations within the field."
    "we need to remind ourselves, as Cathy Davidson has pointed out, that the materials used in a tenure review are meant in some sense to be metonymic, standing in for the “promise” of all the future work that a scholar will do (“Research”). We currently reduce such “promise” to the existence of a certain quantity of texts; we need instead to shift our focus to active scholarly engagement"
    "Until institutional assumptions about how scholarly work should be assessed are changed — but moreover, until we come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion — the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works."
    "I want to suggest that the time has come for us to consider whether, really, we might all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, and by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place."

    Peer Reviewers Get Worse, Not Better, Over Time


    Almost all peer reviewers get worse, not better, over time.

    So suggests a study presented at the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver, Canada, and reported by Nicola Jones in the October 2009 issue of Nature. In his paper "The Natural History of Peer Reviewers: The Decay of Quality," Michael Callaham, editor-in-chief of the Annals of Emergency Medicine in San Francisco, California, reported his analysis of the scores that 84 editors at the journal had given to nearly 1,500 reviewers between 1994 and 2008.

    The journal routinely has its editors rate reviews on a scale of one (unsatisfactory) to five (exceptional). The average score stayed at roughly 3.6 throughout the entire period. The surprising result, however, was how individual reviewers' scores changed over time: 93% of them went down, which was balanced by fresh reviewers who kept the average score up. The average decline was 0.04 points per year.

    As quoted by Jones, Callaham said "I was hoping some would get better, and I could home in on them. But there weren't enough to study." According to Callaham, less than 1% improved at any significant rate, and even then it would take 25 years for the improvement to become valuable to the journal.

    Jones also notes that Callaham agrees that a select few senior advisers are always very useful. But from his own observation, older reviewers do tend to cut corners. Young reviewers assigned a mentor also typically scored half a point better than non-mentored colleagues, but when the mentor's eye disappeared after a year or so, the advantage evaporated.