Tuesday, April 8, 2014

Downloading and opening EBSCO e-books on a Google Nexus tablet

Wow, it's been ages since I last posted here. My minute or two of fame (it didn't even count as 15 minutes) scared me off for a while, but I decided to briefly resurrect this blog to post some instructions I just wrote up. They may turn out to be helpful to other people, or they may be helpful to my own library if we ever decide to start pushing the downloadability aspect of our e-books.

So, last night I woke up and couldn't get back to sleep for a few hours. Since my brain wouldn't shut up, I decided to put it to use on a problem I had been wondering about for a while: how to get my library's EBSCO e-books to work on my tablet.

I already knew that some of them, at least, were downloadable. I had previously managed to check one out, download it onto my work computer, and open it using Adobe Digital Editions. So I knew downloading one and opening it on my tablet should be doable.

Well, it took an hour, but I figured out how to do it. These are very brief instructions: no screenshots and, in most cases, no descriptions of exactly where you need to click. The tablet I used was a Google Nexus 10. I have no idea how these instructions would need to be modified for anything else, but I'm fairly certain iPads would need to use different software than what I'll be listing.

What you need:
  • Adobe Digital Editions username and password
  • EBSCO username and password
  • Opera (a browser)
  • Bluefire Reader (a Secure Adobe PDF reader)
Get your Adobe Digital Editions and EBSCO usernames and passwords before you do anything else.

Opera and Bluefire Reader are both free and can be downloaded via the Google Play store. After installing Bluefire Reader, start it up and authorize it with your Adobe Digital Editions username and password.

Open Opera and select an EBSCO e-book. Check it out – you'll need to log in with your EBSCO username and password in order to do this. Download it. It will seem as though nothing has happened. In the top left corner of the tablet, it should say that a new download has completed – I think the file extension is .ACSM. Click on that notification. It will ask how you want to open it. Select Bluefire Reader. Bluefire should instantly authorize the file via Adobe Digital Editions and then allow you to open the book.

Some notes:

In theory, you should be able to use any browser you want and any e-book reading app capable of reading Secure Adobe PDFs. In practice, that doesn't seem to work. I originally tried downloading a couple books using Chrome and opening them in my preferred Secure Adobe reader, Mantano, and I kept running into walls.

I still can't get Mantano to properly open those e-books, even after having opened them in Bluefire Reader and knowing it should be possible. Since I've used Mantano to open Overdrive Secure Adobe files, I have to assume that this is some quirk of EBSCO's. Bluefire Reader was the app EBSCO specifically recommended after I checked out an e-book.

As far as Opera goes, I tried that after coming across this website (thank you Pasco-Hernando State College Library!), which mentions that Chrome and Android don't seem to play well together where EBSCO e-books are concerned and recommends that Opera be used instead.

ETA: When I tried to use Chrome to download the e-books, Bluefire Reader kept telling me that I had "no tokens" and couldn't open the book.

Wednesday, March 9, 2011

What do you mean, you don't have metadata experience?

(This has been in my drafts for a while. I finally decided to publish it. Maybe one of these days I'll publish all the other drafts I have sitting around.)

A few weeks ago, a cataloger who thought she might soon be unemployed emailed one of the listservs I subscribe to. One of the things she wrote really caught my attention: “The few cataloging jobs that I saw required metadata experience, which I don’t have…”

I could say who wrote this and when it was written, and the post can be found in this particular listserv’s archives, but since I don’t know if this person would be ok with her name also being in a blog post, I’m not including it here. The important thing is simply that she wrote this, and that this is not the first time this has come up.

Offlist, I emailed her about this statement, saying that, actually, she does have metadata experience. MARC is a metadata standard. In fact, it’s a very complicated and unintuitive metadata standard – lots of fields and subfields. The average person, looking at a list of MARC fields, would probably not be able to immediately equate, say, “245” with “Title statement,” and yet for many catalogers who regularly use MARC it actually becomes more comfortable to see fields and subfields rather than plain English labels.

If you know MARC, you have metadata experience. You may not have experience with Dublin Core or EAD or ONIX or XML or whatever else, but you do have metadata experience, and you can apply what you have learned from MARC to learning another metadata standard. (This, of course, takes an employer who is willing to train you or give you the time to get trained - which is a valid worry, what with all the employers who seem to want new employees who can be put to work with little or no training.)
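To make the "MARC is metadata" point concrete, here's a toy Python sketch of my own (the tag-to-element mappings are a simplified version of the usual MARC-to-Dublin-Core crosswalk, and the dict-based "record" is purely illustrative - real MARC has indicators, subfields, and repeatable fields):

```python
# A toy crosswalk: the same bibliographic facts, labeled first with MARC
# tags and then with Dublin Core element names. If you can read the left
# column, you already have metadata experience.
MARC_TO_DC = {
    "245": "Title",       # Title statement
    "100": "Creator",     # Main entry - personal name
    "650": "Subject",     # Topical subject heading
    "020": "Identifier",  # ISBN
}

def to_dublin_core(marc_record):
    """Relabel a simplified MARC record (tag -> value) with DC element names."""
    dc = {}
    for tag, value in marc_record.items():
        element = MARC_TO_DC.get(tag)
        if element:
            dc.setdefault(element, []).append(value)
    return dc

record = {
    "100": "Turtledove, Harry.",
    "245": "History revisited.",
    "650": "Alternative histories (Fiction)",
}
print(to_dublin_core(record))
```

The scheme changes; the knowledge of what a title statement or a subject heading *is* carries straight over.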

Not everyone writes or talks about MARC and metadata as if they are two completely separate things, but it has still somehow been embedded in some people’s brains that they are separate things. On listservs and in blogs, I have read complaints from people that catalogers aren’t very good at recognizing transferable skills. The mental separation of “metadata” and “MARC” is, I feel, one of the main reasons why this is so, and it cripples catalogers and makes them afraid. Skilled catalogers don’t think they are qualified to be metadata librarians (or whatever else they are called). They think that what they do is becoming obsolete.

I do believe that, one day, catalogers will probably be using something other than MARC. However, I don’t live in fear of my future and my career(*)…because I believe there will always be a place for someone who can create and edit metadata. I can learn a new metadata scheme if I need to. After all, I learned MARC.
If you'd like to know more about MARC's place in the metadata world, check out "A Visualization of the Metadata Universe." Actually, this shows you the place of 105 metadata standards - it's awesome and kind of pretty.
* - I do worry that, one day, there won't be any jobs for people like me in libraries. I could work for a corporation if necessary, and almost ended up at one during the course of my post-grad school job search, but I'd prefer to work for a library. With all the outsourcing that's going on, however, that may not always be possible.

Wednesday, February 23, 2011

The blog hasn't died, really it hasn't

I know, I haven't updated this blog in ages. I figured I'd make my first update in such a long while an update about some of the projects I've been working on, either on my own or with Tracy:
  • Fixing up the diacritics problem in the Classical Music Library and Contemporary World Music MARC records. There are still no diacritics (I'm not sure if it would be possible for our OPAC to display those diacritics, anyway), but at least the stuff that looked like gibberish and was unsearchable and unreadable by human beings has been dealt with. Tracy and I worked for an hour and a half on this problem ("Diacritics Hell"). Neither one of us had any idea it would take quite so long.
  • I'm still working on adding 505 fields (contents notes) to records. Right now, I'm concentrating on PS647-PS648 (in our General Stacks area). We've got records for anthologies in this area that don't say what titles or authors they include, meaning that the only way someone would know that, say, History Revisited has a story by Harry Turtledove is through some source that is not our catalog (the record for this book now has a contents note, by the way).
  • The records to which I'm adding 505s get other special treatments, which is why I only do a few per day, or there wouldn't be any time to do anything else. To list a few things I do: add our holdings to OCLC if they're not already there, fix up the title control numbers, and add genre/form headings (which are actually just LC subject headings used in 655 fields - for now, until there are more genre authority records and I'm actually able to load those authority records, this seems to be the best way to go).
  • I'm cleaning up 505 notes that already exist in our catalog. Some of them are enhanced when they shouldn't be, which creates false hits in our title searches. Some of them are unenhanced even though they should be enhanced, which means things that should come up in a title or author search don't. I can fix it globally, but the fix is imperfect, so I have to at least glance at the records before I reload them, slowing the process down a bit.
  • I'm fixing 245 fields (Title) that have subfields in an incorrect order. The incorrect order creates display weirdness that sometimes makes it incredibly difficult to figure out what the actual title is, or see that it's the audiobook version instead of the print version. So far, I've done our vinyl records and VHS tapes, which probably took care of the worst offenders.
  • I'm still loading authority records in order to authorize our headings and make it easier to keep them up-to-date. It's a slow process that I will never finish.
I think that's it, not including all the projects that have ended up on the back burner.
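For anyone curious what the 245 subfield-order fix looks like in principle, here's a rough Python sketch of my own. The (code, value) pair representation and the target order ($a $n $p $h $b $c, the usual display order) are my simplifications - real records need more care with punctuation and repeated subfields:

```python
# Sort a 245 field's subfields into a canonical order, keeping the
# relative order of any repeated codes stable (sorted() is a stable sort).
CANONICAL_245_ORDER = ["a", "n", "p", "h", "b", "c"]

def reorder_245(subfields):
    """Reorder (code, value) pairs so, e.g., the medium designator ($h)
    follows the title proper instead of preceding it."""
    rank = {code: i for i, code in enumerate(CANONICAL_245_ORDER)}
    return sorted(subfields, key=lambda sf: rank.get(sf[0], len(rank)))

# A mangled field where $h landed in front of the title:
bad = [("h", "[videorecording] :"), ("a", "Hamlet"), ("c", "BBC.")]
print(reorder_245(bad))
# -> [('a', 'Hamlet'), ('h', '[videorecording] :'), ('c', 'BBC.')]
```

With $h out in front, a patron can't easily tell what the title is, or that they're looking at the videorecording rather than the book - which is exactly the display weirdness the project is meant to fix.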

Friday, January 21, 2011

Something cool - Amazon to MARC and IMDb to MARC

The Amazon to MARC converter takes information from Amazon.com book records and turns it into MARC. I don't see myself using the MARC records this produces, because the records would take so much cleanup that it might actually be easier to start from scratch, but I still think it's pretty cool. Plus, some aspects could be useful for my work: I could copy and paste summary information from here and avoid (I'm pretty sure) having to hunt down quotation marks and apostrophes that Connexion doesn't like, and I could potentially use this as a starting place for call numbers and subject headings.

The IMDb to MARC converter (prototype) takes information from IMDb and turns it into MARC records. I think this converter's output is actually even more helpful than the Amazon to MARC converter's - video recording MARC records take a lot of work, because of all the name access points and various notes. This would take care of some of that work, although there'd still be a lot of fixing and fiddling to do. I love the "verify names" feature (also present in the Amazon to MARC converter). I could see this tool being especially popular with libraries that, in order to save time, have a policy of basing video recording cataloging on container information - this would probably help them save even more time. Again, as with the Amazon to MARC converter, I probably wouldn't use the MARC records produced by the IMDb to MARC converter, but there are still certain things I could copy and paste into the records I end up using in Connexion.

UPDATE: The Amazon to MARC converter doesn't just do book records - I just had it generate a record for a DVD, VHS, and CD. The "classify" information seems to be drawn from OCLC - too bad, I was hoping it could help Tracy and Trudy in those cases where they have trouble finding OCLC records that match the Contemporary World Music records they're assigning call numbers to.

Wednesday, January 5, 2011

Electronic theses and dissertations

I added 115 URLs and their corresponding e-resource item records to bibliographic records for theses and dissertations today. In theory, every thesis and dissertation for which we have a bibliographic record and that is available via Proquest should now be searchable as Type: Thesis/Dissertation, Location: Online Access. Yay!

While I was at it, I also cleaned up some stray issues in the records, and added abstracts to records I was editing that did not already have them.

Friday, December 10, 2010

Newest project, with an observation

Now that I've finished the project that flipped subfield d's and q's into their correct order and got rid of all or most of our obsolete subfield w's, I've got yet another project on the table. I realized that, even though I can't export records based on publication year, I can export them based on record creation date, which should usually be within a year or two of the publication date. With this in mind, I'm hoping to add and (where necessary) enhance contents notes in bibliographic records added to our catalog in the past five years. All of the below is being done using MARCEdit, by the way.

So far, the first step is going well. I exported the records, extracted all the ones that have 505 fields (contents notes) with " / " in them, extracted all the ones in that file that don't have 520 fields (summary, etc. notes), and globally enhanced all the 505s. However, not all of these 505s really need enhancing, and there's some potential for error in how enhancing occurred, so I'm going through the records one-by-one before reloading. This is still going a lot faster than enhancing them all individually would have gone.
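In spirit, the global enhancement is just text-splitting, which this little Python sketch of mine illustrates (the (code, value) pair output is my own simplified stand-in for real $t/$r subfields, and the function is deliberately naive):

```python
# Split a basic 505 contents note on " -- " between entries, then split
# each entry on " / " into a title ($t) and statement of responsibility
# ($r). Like the real global fix, this is imperfect - a " / " inside a
# title would be split wrongly - which is why each record still deserves
# a glance before reloading.
def enhance_505(basic_note):
    """Turn a basic 505 string into a list of (code, value) subfields."""
    subfields = []
    for entry in basic_note.split(" -- "):
        if " / " in entry:
            title, resp = entry.split(" / ", 1)
            subfields.append(("t", title))
            subfields.append(("r", resp))
        else:
            subfields.append(("t", entry))
    return subfields

note = "The last article / Harry Turtledove -- Over there"
print(enhance_505(note))
# -> [('t', 'The last article'), ('r', 'Harry Turtledove'), ('t', 'Over there')]
```

Filtering first for 505s containing " / " (and skipping records that already have 520 summaries) keeps the batch to the records most likely to benefit.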

I haven't had much of a chance to work on this project, because of all the end-of-semester stuff that's been happening, and because cataloging new things needs more attention right now. Still, I've looked through enough records to discover something I hadn't realized: our NetLibrary records, which I had always assumed were the highest quality ones, sometimes have contents notes that end prematurely. The contents notes might only cover half the actual contents of the book, with no indication (via the first indicator) that these fields are in any way incomplete. I'm fixing them up as I come across them, but it makes me wonder what other kinds of problems there might be that I don't know about.

Monday, December 6, 2010

"You say you want a revolution"

This is a bit of a rambling post, but the general topic is RDA. That seems to be all anyone ever talks about in the cataloging world anymore. Not surprising, really.

There are a lot of complaints about RDA being voiced on the OCLC-CAT listserv, of all places. Why OCLC-CAT? I'm pretty sure it started because of the way OCLC has been allowing RDA data (authority and bibliographic) to be added to the WorldCat database.

When I originally heard that RDA would be tested before the Library of Congress made any decisions about it, I assumed that that test would take place outside of the live cataloging environment. This has not been the case. The word "test" in OCLC Land sounds an awful lot like "the rules have officially changed, deal with it." OCLC has instructed catalogers to treat RDA bibliographic records vs. AACR2 bibliographic records the same as they treat AACR2 vs. AACR bibliographic records: if an RDA record already exists, an AACR2 record would be considered a duplicate and is therefore not supposed to be entered. Catalogers not using RDA may edit the record back to AACR2 locally.

How exactly does this make sense? I would understand if RDA were the official new rules, but they're not, at least not in the U.S. I know that there are countries that have decided to implement RDA already, and WorldCat is an international database. However, couldn't OCLC just instruct catalogers to treat RDA vs. AACR2 records as parallel records? For instance, if an RDA record already exists, catalogers still using AACR2 (which is most of the U.S.) could enter an AACR2 record, thereby giving other AACR2 users the ability to share the work rather than having every AACR2 user edit the RDA record locally. When/If RDA is implemented by the Library of Congress, OCLC could set their deduplication software to consider RDA and AACR2 records for the same title as duplicates, but it makes no sense to do so before the end of the supposed test.

The bigger uproar on OCLC-CAT right now seems to be focused on authority records. I will admit to not understanding everything everyone is saying - the complaints seem mainly focused on the way RDA information is being added to authority records (RDA name headings live in 700 fields right now, with the AACR2 name headings still in 100 fields - no information has been given on what will be done to these records if RDA is implemented). Having RDA name headings in 700 fields doesn't hurt DSL, but, from what I've heard, there are libraries whose authority control systems choke on this. What does worry me about all of this is that, like the bibliographic records, these changes are all happening to live records: this is not a separate authority file just for the use of those testing RDA, but rather the authority file used by everyone, regardless of whether or not they are test libraries. In effect, non-test libraries are being forced to take part in the test. How can this still be considered a test if everything is happening in a live environment?

The uproar about the way OCLC has been handling the RDA test resulted in Memorandum Against RDA Test, a petition that has so far been signed by 312 people. Although I agree with the petition, I don't always agree with the strong wording that Wojciech Siemaszkiewicz, the person who I believe started the petition, has been using on the OCLC-CAT listserv when talking about RDA. Siemaszkiewicz has an unfortunate tendency (unfortunate because it immediately gets RDA supporters' backs up and occasionally even alienates those who oppose RDA) to phrase complaints about RDA in ways that bring war protests and the rhetoric of revolution to mind.

Siemaszkiewicz isn't the only one stirring things up - Deborah Tomaras, on the OCLC-CAT listserv and others, has encouraged those who are against RDA to send their concerns to the personal emails of the members of the RDA Coordinating Committee. She even provided all the email addresses in case the website with those email addresses is taken down. While I can understand the frustration that resulted in this particular call to action, since it feels as though complaints and concerns about RDA and the RDA test have fallen on deaf ears, I'm also not comfortable with what Tomaras is asking catalogers to do. I don't really know what catalogers who are against RDA should be doing, since going through the proper channels has so far seemed ineffective, but spamming/harassing the individuals on the RDA Coordinating Committee isn't, to my mind, the way to go. Can we all just please remember that we're supposed to be professionals?

I may not be sure how I should be communicating my concerns about RDA, but I do have concerns, and one of them is whether or not a drastic reorganization of cataloging rules is even necessary. I recognize that there are problems with AACR2 - I rarely catalog any of the formats (such as databases and websites) that are difficult to catalog with AACR2, but, when I do, it's painfully clear that something needs to be done. At least, something needs to be done to the rules for electronic resources and other things with similar cataloging problems. As far as I'm concerned, the cataloging rules are fine for most physical materials.

Let's be clear about this: the cataloging rules are different from the encoding standards, which are different from ILSs. One of the things that consistently frustrates me about the RDA arguments is that there seems to be an assumption on the part of those who are most in favor of RDA that most of our cataloging problems reside in our cataloging rules. I would argue that this is not the case.

Maybe I need to keep a list of every catalog wish list item I am asked to implement that I can't, along with the reason why I can't. I'm pretty sure that, most of the time, when I can't implement something it's because of the way MARC is set up or the way our ILS works, not because of AACR2. If AACR2 really is the reason why something can't be done with MARC, or why something in our ILS is not doing what our users (whether they're students, faculty, or librarians) want, and if that were plainly stated to the cataloging community, I would happily accept a change to the rules. However, I don't agree with change for change's sake, and that's what RDA feels like. On the one hand, RDA is supposed to make everything better. On the other hand, it's supposed to not change things so much that AACR2 records can't live side by side with RDA records. I don't see how both of those statements can be true.

So, that's it from me for now. I don't know if those who are most against RDA will ever be able to reconcile with those who are most for it - neither side really seems to understand the other, or maybe they're just not willing to listen to each other. Or even talk to each other (it seems like pro-RDA talk may be happening on Twitter a lot - I wouldn't know, since I don't use Twitter, but I may have to start just to see what's going on - while anti-RDA talk is concentrated on listservs). Another problem seems to be that not all ILSs are created equal and that not everyone understands this. But then, I may just think that because I'm in the camp that believes our largest problems lie in our ILSs and MARC 21.