2014-11-12

The Charleston Conference felt bigger than ever this year, with attendees in the halls and elevators commenting on the profusion of programs across multiple venues, the standing-room-only crowds at popular breakout sessions, and the fact that they could no longer count on seeing everyone they knew among the other attendees over the course of the conference. It is equally impossible to see even a fraction of the many compelling programs presented during the event; below are only our impressions from the handful we could personally attend.

Show me the data

While MOOCs remain on the agenda, the buzz from last year seems to have largely died down. Instead, “data” is the word of the moment. One iteration is the library’s role in helping researchers meet a growing number of mandates from research grant funders. In “Building Capacity in Your Library for Research Data Management Support (Or What We Learned from Offering to Review DMPs),” William Cross and Hillary Davis of NCSU Libraries explained how the library was able to help faculty improve their data management plans, without hiring a dedicated data librarian, by developing a committee whose members each brought different aspects of the necessary expertise and departmental relationships, learned from one another, and then cycled in and out. The team worked in Google Docs to allow quick, asynchronous edits and focused on “bite-sized improvements,” such as suggesting participation in Dryad or Creative Commons licensing, highlighting security concerns, and connecting researchers to other campus resources.

Data pervaded the program in other ways as well, from the Charleston Seminar on Data Curation to “Advanced Data Analysis: From Excel PivotTables to Microsoft Access” and “The Signal and the Noise: Libraries and Institutional Data Analytics,” which included documenting the library’s role in student success. “Open Access for Data Sets” addressed connecting underlying datasets to published papers. The many, many sessions on collection analytics, altmetrics, and “evidence-based” collection development showed the library profession’s increasingly sophisticated use of its own data to drive decision-making at an increasingly granular level.

In particular, the huge ballroom that filled to capacity, if not beyond, to hear “To Boldly Go Beyond Downloads” testified to the hunger to understand how articles are being used in the real world. Speakers Gabriel Hughes, VP of Web Analytics and Usage Research, Elsevier; and Carol Tenopir, Professor at the School of Information Sciences, University of Tennessee, Knoxville, presented on a project to find out how faculty are really using articles, premised on the inference that, as article sharing gets easier, downloads are becoming a weaker proxy for actual reading behavior. The project, based on interviews, focus groups, an international survey, and data from formal sharing platforms (public groups only, owing to privacy concerns), is still underway. Preliminary findings indicate that sharing is up but is a nuanced behavior: relatively few share to a worldwide audience, and those who do are usually the paper’s authors uploading to a repository. Most sharing occurred within a small working group, followed by sharing to other researchers and to students; while many shared with all three audiences, they used different platforms to do so. Most sharers were already entitled to the content but used article sharing for the convenience of mobile and remote work. Formal academic platforms accounted for a minority of the methods used, which also included Dropbox, Google Docs, and, of course, email and social media. “The big question,” said Tenopir, “is, is a COUNTER-like measure possible?” Finally, “Data Visualization in the Library” addressed how to communicate the story the numbers are telling to stakeholders.

Text messages

Another hot topic was open educational resources (OERs), AKA open-access textbooks, and the creative use of course reserves to supplement or replace the traditional text. Though textbooks were once considered largely outside the library’s bailiwick, the rising cost of textbooks, and the resulting large number of students who either take fewer courses or take them without reliable access to the required text, is increasingly a problem libraries feel called upon to help solve. In “From Course Reserves…to Course Reversed? The Library’s Changing Role in Providing Textbook Content,” Nicole Allen, Director of Open Education, SPARC; Charles Lyons, Electronic Resources Librarian, SUNY University at Buffalo; and Bob Nardini, VP, Product Development, Ingram Library Services, cited statistics showing that textbook prices have risen more than 80 percent in the last decade, that two out of three students have forgone buying a required text, and that one out of two have taken fewer courses due to the cost of textbooks. To address these concerns, Allen said, libraries can take three paths: create new open textbooks, as Rice University did; tackle the issue through public policy, as the state of Washington has begun to do; and/or participate in sharing existing resources, such as through MIT OpenCourseWare. Other ways to support OERs include collecting reviews of such texts, as the University of Minnesota does, and partnering with college bookstores to draw attention to existing library-provided resources. Meanwhile, Lyons pointed out that studies claiming students prefer print to etextbooks are often done in a vacuum. Among his own student population, he said, when the etext is half the price of print the numbers flip, and 70 percent prefer electronic. Nardini highlighted the potential of custom textbook creation by individual professors, such as via Ingram Construct, to disrupt the traditional textbook market. In the meantime, he said, aggregating purchases, as has been done in Brazil, can lead to dramatic savings.

Further panels, including “Libraries Leading the Way on the ‘Textbook Problem,’” “Supporting Student Success: Purchasing Textbooks for Reserves,” and “User-Centered Collection Development: A Textbook Purchasing Pilot Project,” highlighted other case studies and alternative approaches.

It’s the economy

Though “library as publisher” can be one solution to the dilemma of rising text prices, and the trending concept has matured enough to rate its own preconference, few of the attendees we encountered were focused on putting on the publisher hat. Instead, they were tackling some of the classic core aspects of collection development as publishers’ customers: how to make the right tough choices when limited budgets come into conflict with rapidly rising prices for ejournals and, in some more recent cases, scholarly ebooks.

Economics equally underpinned both of the extremely popular sessions, added at the last minute, on what caused the recent bankruptcy of major subscription agent Swets and what former Swets customers should do now. In “Swets: What Is Going On in Our Industry?” Dan Tonkery, former president of Faxon, attributed part of the failure to the fact that Swets had been owned by a venture capital firm that, he said, sold off all non-subscription units, including testing and hosting, to pay a dividend. The resulting cost cutting in technology staff, he said, left Swets poorly positioned to develop and execute new value-added services, a view later corroborated by a former Swets customer commenting from the audience. As the shift from print to electronic and from agents to direct sales, along with the rise of consortia, eroded margins for subscription agents (the top 30 publishers, he said, have dropped from selling 90 percent through agents to only 47 percent), Swets was not able to make up the difference with new offerings.

Former Swets customers were advised to pull reports from SwetsWise, to stop sending payments ASAP, and not to honor any invoices without proof that Swets had paid the publisher. Tonkery said he knew Swets had not paid many 2014 orders, and a librarian in the audience volunteered that even some of her 2013 orders had not been paid. Librarians who had already prepaid were advised to contact the individual publishers, since some are honoring subscriptions even if they didn’t receive the money, so long as payment was made before bankruptcy was declared. Going forward, Tonkery suggested that librarians pay close attention to vendors’ financial stability, following credit and annual reports where available, and be proactive when there is credible information of a problem, rather than waiting for the troubled company to advise them. For those with large prepayments to protect, he mentioned the possibility of performance bonds; more commonly used overseas, these can add two or three percent to the cost. Meanwhile, Tonkery advised publishers to contact their subscribers to make sure they have selected another method, whether another agent or a direct sale; to monitor renewals in January and February; and to set liberal expiration dates on their hosting platforms where possible. The biggest problem, he said, will be consolidation orders, where publishers don’t know the address of their ultimate customer.

Even the ambitious re-envisionings of the panel on alternative models for scholarly monographs, “What’s the Big Idea? Mellon, ARL, AAU, University Presses, and the Future of Scholarly Communication,” were ultimately rooted in a recognition of economic scarcity: the current model of funding scholarly publishing is hitting the limits of its ability to scale to meet the need, in this case the need to publish enough books to help scholars doing good scholarship get tenure, even when their work does not appeal to a wide enough book-buying audience to keep university presses breaking even. Presenter Charles Watkinson, Director of the University of Michigan Press and Associate University Librarian for Publishing, University of Michigan Library, cited a sobering statistic: the press, he said, has seen its revenue decline by one third over the past five years due to cuts in library monograph budgets.

Instead, an AAU-ARL Task Force on Scholarly Communication proposes a first-book subvention model, a partial subsidy of publication costs under which colleges would pay to publish the first book of scholars they employ. The idea is that schools would benefit from the increased prestige of their scholars, and those that cannot support a whole press of their own would nonetheless contribute. The books would be available open access in electronic form, though presses would be free to sell a value-added electronic version as well as a print one. The Mellon Foundation has proposed to provide seed money for a similar, though perhaps broader, intervention, not necessarily restricted to first books. Though the foundation has already funded two grants to enable three institutions to plan and walk through who would be eligible, where the money would come from, and how it would be allocated, Helen Cullyer, Program Officer, The Andrew W. Mellon Foundation, stressed that this is all very preliminary and the foundation might yet decide not to go ahead based on the outcomes. The goal, panelists said, is to decouple the evaluation of books for tenure from the evaluation of their commercial potential, since a groundbreaking work of scholarship might appeal only to a small niche audience. To do so, subventions would need to cover not only publishers’ first-copy costs but also the opportunity cost of giving up publishing something else.

One concern, however, is where such a system would leave adjunct professors, independent scholars, and those overseas, who may already be disadvantaged by open access journal publishing, if monographs also move from a pay-to-read to a pay-to-publish model. Cullyer suggested that scholarly societies might step in to help fund adjuncts and fill other gaps.

(In informal conversation after one panel, some attendees even felt it was time for a radical solution to the ultimate scholarly scarcity: tenure-track positions. With fewer than 20 percent of positions at some schools tenured, and that number falling, they felt it was time to scrap the system entirely and start over.)

Assessing the legal climate

The Long Arm of the Law panel is as much a Charleston Conference tradition as the Hyde Park Debate (this year titled “Resolved: Wherever possible, library collections should be shaped by patrons, instead of by librarians”). This year’s panel reviewed some of the legal developments affecting intellectual property, including the recent reversal of the Georgia State e-reserves decision; the ReDigi finding that reselling ebooks doesn’t fall under first sale because it involves copying; the short-lived attempt to make law students return their Aspen casebooks for pulping instead of reselling them; and the European Union’s recent decision that search engines must remove certain results under a “right to be forgotten.” Other, less well-known cases referenced included White v. West, Fox v. TVEyes, and Swatch v. Bloomberg. The trend in all these cases, according to Laura Quilter, Copyright and Scholarly Communications Librarian, University of Massachusetts Amherst, is that adding metadata, as in indexing, is considered transformative in the fair use analysis. More broadly, she said, the HathiTrust ruling, among others, points to an analysis in which transformativeness is not required for something to be considered fair use but is only one way of getting at the question of whether a work is being used for a different purpose or audience.

All in all, despite realism about the restrictions imposed by the law, the technology, and the budget, Charleston attendees seemed full of optimism that with information and ingenuity, librarians can overcome their challenges to deliver more of what their users want and need.
