Official blog of the Ontario Library and Information Technology Association

Why I think faculty and librarians should not host their work on Academia.edu

This is an *evergreen* tweet of mine:

A gentle reminder: Academia.edu is a privately owned company funded by venture capital groups that will expect profit from it

— Mita Williams (@copystar) June 12, 2015

When this tweet is re-found and re-tweeted, it’s usually followed by questions or challenges to what I said.

Hm. As opposed to the academic journals from Springer, etc., which are… ?

— Dave Gray (@davegray) November 18, 2015

@ruebot @mrgunn @captain_primate @copystar isn’t Twitter the same? Would I stop tweeting my articles on Twitter? Trying to understand

— ℳąhą Bąℓi مها بالي (@Bali_Maha) November 18, 2015

So I thought I’d summarize some of the reasons why I think faculty and librarians should not host their academic work on Academia.edu.

Academia.edu is not an educational institution

@Bali_Maha @mrgunn @captain_primate @copystar Twitter doesn’t have an edu tld.

— nick ruest (@ruebot) November 18, 2015

“Academia.edu is not a university or institution for higher learning and so under current standards would not qualify for the EDU top level domain. The domain name “academia.edu” was registered in 1999, prior to the regulations which required .edu domain names to be held by accredited post-secondary institutions. All .edu domain names registered prior to 2001 were grandfathered in and not made subject to the regulation of being an accredited post-secondary institution” [Wikipedia, “Academia.edu”, November 20th].

Commercial repositories use dark-arts user design to encourage the uploading of articles that the author frequently has no license to share

@Bali_Maha @ruebot @mrgunn @captain_primate True, but not in an IR while aca. edu is designed to profit from individual culpability.

— Mita Williams (@copystar) November 18, 2015

Institutional repositories admittedly have some pretty bad user interfaces. But it’s understood that some of the unpleasant friction that comes with uploading your research into your university’s repository exists because your institution will not automatically publish uploaded material without assurances that a publisher’s rights are not being infringed. Commercial repositories have disclaimers expressing that they, too, are concerned that copyright not be infringed, but the extreme ease with which a user can re-publish articles formally published elsewhere betrays the strength of this concern. Academia.edu continues to design services so slick that users don’t realize they have triggered them, such as the Sessions feature, which it launched and then disabled in May of this year. Also, services like Academia.edu appear to be designed to cannibalize traffic from your official point of publication.

Selective enforcement by publishers keeps universities from providing the kinds of services that commercial repositories are now trying to fill

@mrgunn while for profit companies like Aca .edu and Mendeley are allowed to grow. This is what I mean by selective enforcement of copyright

— Mita Williams (@copystar) November 18, 2015

We need to resist the narrative that commercial repositories are filling a market need that libraries and universities have refused to pursue. We have wanted a more social and inter-connected interface to research for some years now.

But when libraries and universities have responsibly hosted published research articles under fair use / fair dealing and have restricted use to classroom participants in Learning Management Systems (such as Blackboard) or library Course Reserve Systems, we have been pursued and sued by publishers. In Canada, we have had to deal with Access Copyright, and in the US, libraries have been following the Georgia State copyright case with much concern.

In conclusion, this is my new “evergreen tweet” about Academia.edu:

@mrgunn @copystar for-profits exploiting selective enforcement to free-ride off work funded by non-profits? I’m okay w/calling that “bad”.

— Nancy Sims (@CopyrightLibn) November 18, 2015

The City As Classroom

On Wednesday morning, I had the pleasure of giving the opening keynote at the Wisconsin Library Association Annual Conference.

My name is Mita Williams and for the last 16 years or so, I’ve been working at the University of Windsor in Windsor, Ontario. If you don’t know where that is, Windsor sits just across the river from the city of Detroit as you can see from this map. Some years ago, I played a game that challenged the player to ‘map their life’ for points. On screen is a map of my life circa 2008.

In 2014 I had the privilege and the pleasure of a year’s sabbatical from work. During that time, I read, wrote, volunteered, and otherwise explored a variety of themes, and I am grateful for this opportunity this morning to share with you some of what I learned that year and how it might fit into a context of librarianship and, more importantly, into our communities.

The title of my talk is taken from the book pictured behind me: Marshall McLuhan’s City as Classroom. It was published in 1977 and was the last book of his career.

Please be aware that I am not a media ecologist. I have a degree in Geography and Environmental Science and I have never taken a single course in Communication Studies. But I can say that I have read several of Marshall McLuhan’s works and have read the biography of the media theorist by Douglas Coupland of Generation X fame, and I highly recommend it if you too need help trying to understand how a frumpy Canadian professor of Renaissance rhetoric turned into a media celebrity for his scholarship.

I’m interested in McLuhan’s work for a number of reasons. The largest reason is that I am, like so many of us, constantly trying to make sense of what it means to be a digital citizen of the global village and it was McLuhan who warned us that electronic media would change everything around us and about us long before most.

McLuhan also briefly lived in Windsor, Ontario, when he taught at the precursor of the university I work at now. In fact, the photo on screen was evidently taken during McLuhan’s time at Windsor’s Assumption University.

And if your eye-sight is super sharp, you might see that underneath the words, Marshall McLuhan on that magazine cover are everyone’s favourite words ‘The Future of the Library.’

According to the work ‘McLuhan in Space: A Cultural Geography’, as far back as 1957, Marshall McLuhan said he believed that because the electronic information explosion was just so massive and so powerful, most learning happens outside of the classroom. The City as Classroom follows up on this theme. It’s a fascinating book and was written for an audience of high-school students.

That being said, I have to confess, every time I pick up the book, I’m actually a little disappointed because this book does not contain the answers I’m looking for. No, the book is true to its pedagogical praxis and is largely filled with questions and difficult questions, at that.

This is from the first page of the book:

Is school supposed to be a place of work? Is the work done by the students, or the staff, or both? Look up the root meaning of the word school (schola <  Greek σχολή ). When you are at school are you separated from the community? If so, are you separated physically or in other ways?

And that is a good question to ask oneself when at school. It’s also a good question to ask about the work that we do.

But for my talk today, I’m not going to answer that question directly. Instead, I’m going to explore the territory that might lead us to the answer to that question.

During this talk, we are going to explore how we might embed the:

  1.     library / librarian in the community
  2.     collection in the community
  3.     community into space/time

So let’s begin:  How do we embed the library/librarian in the community?

For most of our existence, the obvious answer to the question “How can a library system increase its presence in a community?” has been the establishment of a branch library. It’s important to remember that while there are people who prefer the larger, grander spaces of the Central Branch and its greater choice of materials, for others, a library branch within walking distance is their ideal.

But the obvious answer of more library branches isn’t so obvious anymore. In my own community, my public library is facing a budgetary shortfall and so recently the city council proposed that some library branches be closed including the branch in the poorest neighbourhood of the city. When there was an outcry about this loss of service, the city suggested that the neighbourhood be served by a bookmobile instead.  This begs the question, is the bookmobile an equivalent to a branch library?  And if it’s not, why isn’t it?

There is some urgency to this question.

Despite the evidence that libraries are very well used, many communities are cutting back on the budgets of their library systems and thus cutting back on the hours and branches of their libraries. And interestingly, while *public* library branches were scaling back or even closing, *People’s* Libraries – such as those in the temporary autonomous zones of the many Occupy camps – sprang up, as did a variety of other civic and urban interventions such as the Branch project in Brooklyn.

And as we are all aware, Little Free Libraries have also proliferated thanks to our well-intentioned neighbours in our community. There are two such Little Free Libraries within three blocks of where I live. I sometimes peek inside if I have the time to be curious, but I never visit a Little Free Library when I want to read a new book, and I think that’s the key to the experience. Now, I don’t think people who have a Little Free Library want to replace libraries. If anything, they want to celebrate books, community, and reading, and just want to contribute to the neighbourhood’s ‘gift economy’, and as such these libraries don’t upset me much.

But when my non-librarian friends start sending me links to projects such as the JetBlue book vending machine project – a project designed to distribute free books to children in the ‘book deserts’ of Washington DC – well, that’s when I start feeling nervous about ‘free book projects.’

Indeed, it is worth considering the natural extension of this particular kind of service as The Pioneer Library System in Norman, Oklahoma has done. This branch library is essentially a self-contained vending machine that features books, DVDs, audiobooks and even acts as a WiFi hotspot. And earlier this year they’ve added functionality for users to transfer ebook holdings from Overdrive to their personal devices.

But there have also been responses from libraries on the services end of the spectrum of the work we do. There are libraries and librarians taking a page out of the urban tactics playbook and going to where the people are by having a regular or a temporary presence at festivals, farmers markets and the like. And by presence I mean, specifically the presence of library staff.

And libraries such as the Cleveland Public Library have gone further with the pop-up library through their Literary Lots program, a program that “brings books to life” by transforming vacant lots in Cleveland into temporary educational spots for children. 

One of the public library programs that I’ve been tracking is something called the How to Festival. These festivals are sometimes small in scale while other times they are really quite ambitious like the festival pictured here: 50 things in 5 hours.

One of the reasons the “How to Festival” model particularly attracts me is that, many times, it is designed to involve many non-profit groups and even business partners, and in doing so it celebrates and shares the knowledge that is embodied in the community and found in the residents themselves.

Now, could libraries extend and embed these ‘How to Festivals’ into the community in a more persistent model? And if you could embed single ‘how to’ sessions, could you then build on them to create a curriculum? One possible model that tries to do this very thing, which I’ve discovered and have been trying to learn more about, is the Cities of Learning project. The Cities of Learning project – from my understanding – involves or has involved Dallas, Philadelphia, Los Angeles, Columbus, Washington DC, and Chicago.

The Cities of Learning project emerged from the Chicago-based City of Learning project, which itself grew out of the city’s 2013 Chicago Summer of Learning. In that particular project, more than 100 youth-serving organizations, including the Chicago Public Libraries and Mozilla, joined together to make a single program that allowed the youth of Chicago to earn digital badges in recognition of their achievements in fulfilling creative and volunteer activities.

This project is also the result of the support of the Connected Learning Alliance, which is in turn supported by the MacArthur Foundation. I mention this because it’s important to know that the creative and activity-based programming is not an accident but is at the heart of its design.
Unfortunately, I don’t know much more about the Cities of Learning program. I don’t know how successful the program is or whether it achieves the ambitious goals it sets for itself. I will tell you that I love the idea of connecting volunteer and creative activities in a thoughtful way that brings youth to libraries, museums, galleries, and community organizations. That being said, when I see badges sponsored by Best Buy it does give me reason to pause. And perhaps that’s unfair of me, as businesses and large corporations are part of all of our communities.

I’m also curious whether the youth involved are motivated intrinsically or extrinsically, and how the badge structure helps bridge the gap between the two. I do believe that it is possible to create a structure that involves points and badges in a way that encourages participation and creativity without the points actually meaning anything. And that’s because I’ve played a game that is not dissimilar to Cities of Learning. It’s called SF0.

SF0 was originally designed to be a game to be played in the city of San Francisco but really the game can be played anywhere.

“SFZero is a Collaborative Production Game. Players build characters by completing tasks for their groups and increasing their Score. The goals of play include meeting new people, exploring the city, and participating in non-consumer leisure activities”

The map on the first slide of my talk was my entry for the task ‘Map your life’. This is my entry for the task ‘Leave clues’. If you complete a task, you do get a certain amount of points. But if you go above and beyond mere completion of the task and you delight other players with your entry, they can assign you additional points. The points, of course, don’t mean anything; there are no winners in the game and the game never ends. You just keep playing and exploring.

At this point, I’d like to move to part two: exploring how we may embed our collections into our community.

And to do so, let’s continue our journey from the city as classroom to the city as playground.

This is a screen from the massively multiplayer ‘augmented reality game’ called Ingress. Ingress is a territory capturing game between two sides – the Enlightened and the Resistance – who battle over ‘portals’ which are only visible from your smart phone. In the real world, the portals are usually sculptures, landmarks, or historical monuments. The screen behind me shows the sculpture garden along the Riverside park that hugs the Detroit river, with the blue team (which is the Resistance) establishing lots of captured territory south of the river and team green (which is the Enlightened faction) dominating downtown Detroit.

I have lots of friends who love Ingress and love how it draws them outside and encourages them to explore new places in search of portals, and they have even found camaraderie through Ingress. As for me, I found none of those things, and I think the game is terribly boring (Ingress fans: fight me).

Mobile devices have long since become ubiquitous, and yet there are still very few games like Ingress. That’s probably for a number of reasons. For one, it’s very expensive to run a game that uses the real world as its game board, because the real world is big and there needs to be a lot of human intervention in constructing its game layer. Ingress was possible because it was bankrolled by Google, although Google recently split from Niantic Labs, the company that designed the game. Also, in order to play the game in the real world, players need not only a smart phone but one with a generous data plan. Furthermore, because the phone must routinely use GPS for location, the game is a battery vampire. Its battery draw is so considerable that there is a co-branded Ingress portable battery for sale.

While Ingress does not thrill me, the latest game from Niantic Labs, yet to be released, does. It is another augmented reality game, it is funded by Nintendo, and it is called Pokémon Go. According to the promotional videos, the game will allow players to capture Pokémon ‘in the wild’ and to battle other players. Now you have to understand where I’m coming from – I spend about 3 hours a week at the local game store so my kids can play competitive Pokémon with others in the city, and I have a fondness for the game and that particular gaming community.

And so, of all the electronic wearables that we’ve been sold hype for – Apple Watches, Google Glass – the one that I’m probably most likely to buy is this one. This wearable means that you can be notified when you are near a Pokémon without having to look at your cell phone. It’s also an adorable hybrid of a Google Maps location pin and a Pokéball. You gotta catch them all!

Now while my kids are obsessed with collecting Pokémon cards, my mother has an altogether different obsession. When she travels, she is likely to carry this book. It’s her bible. And she has a lifelist of all the unique birds she’s seen. Yes, my mother is what the Brits would call a ‘twitcher’ otherwise known as a bird watcher, or as she tends to call herself, a birder.

When it comes to birding, I’ve seen firsthand how new services built around mobile devices have augmented the book and – I’m going to say it – have made the printed field guide obsolete. Most comprehensive birding books are heavy and bulky to carry, whereas you can carry a birding app on your phone. These birding apps allow for a multitude of images for each species, as well as maps that express normal habitat and migration routes, and they also feature sound recordings of bird calls, which are a crucial tool for bird identification. But the feature that makes current birding apps a bird of another feather is what is known as ‘eBird integration.’

eBird is a free online program that allows birders to report and share their birding observations with their friends and with the Audubon Society and the Cornell Lab of Ornithology. This capturing and sharing of time-stamped and geotagged sightings provides a multitude of new services and benefits to both scientists and birders. And, more profoundly, eBird – unlike any print book – can help you find birds recently spotted in the wild. eBird changes the way you bird.
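I don’t know how eBird is implemented internally, but the core idea of ‘recent nearby sightings’ is easy to sketch: filter time-stamped, geotagged records by recency and great-circle distance. Here is a minimal, hypothetical version in Python (the `Sighting` record and function names are my own invention, not eBird’s API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class Sighting:
    species: str
    lat: float
    lon: float
    seen_at: datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def recent_nearby(sightings, lat, lon, radius_km=10, days=7, now=None):
    """Sightings reported within the last `days` days and within
    `radius_km` of the point (lat, lon)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [s for s in sightings
            if s.seen_at >= cutoff
            and haversine_km(s.lat, s.lon, lat, lon) <= radius_km]
```

A birder standing on the Windsor riverfront could then ask `recent_nearby(all_sightings, 42.3167, -83.0333)` and get back only the fresh, local reports.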

This ability to link the real world with the digital is still unfolding around us. There’s much talk about the Internet of Things, but those explorations tend to focus on goods within our homes, like thermostats, lamps, and garage doors. That being said, there are some interesting explorations that show that, just like eBird, the virtual/real connection holds the potential to completely reframe our relationship with objects and organizations in the real world. In my hometown, there is a group of citizens who have been advocating that the city government make the live, GPS-driven locations of the city’s buses available to the public, as many other cities such as Detroit do.

A relationship changes fundamentally when you no longer think about where an object should be and you start thinking about where an object *is.* Just think of finding yourself at a bus stop with no bus in sight even five minutes after the scheduled stop. Wouldn’t you like to know where your bus is?

In order to embed our collections into our communities, we need to explore how we can link the real world with the digital. QR codes, for all the derision that they still attract, have not yet been replaced with anything else that does the job better. Or I should rephrase that: there are better ways of embedding information into spaces, but these haven’t been widely adopted yet. For example, Peter Rukavina, the hacker in residence at the University of Prince Edward Island, has embedded a little digital library locally by installing a PirateBox on a street lamp in Charlottetown. The PirateBox provides historical digital objects from the university library’s Islandora collection as well as some other Creative Commons licensed material.

The challenge, of course, is that you have to be aware that there is a PirateBox (or LibraryBox) in the neighbourhood in order to find it, connect to it, and take advantage of the information provided. It’s as if we can’t escape the historical plaque as a means of providing context to the outside world.

That being said, there is a technological space where there is massive potential for augmenting the real world with supplemental information, and it is already being expressed in the surprisingly little-discussed product Google Goggles. If you have an Android phone and you install and activate this app, Google Goggles will run an image search on every picture you take, and it has an uncanny ability to discern and identify objects from features in the landscape… it can also identify books from their covers, it can give you consumer information from barcodes, and it can even provide language translation from non-English scripts.

Now, this being a Google product, it is impossible to know whether Google Goggles is a crucial component of how the company envisions the future of search or whether the service will be discontinued tomorrow.

I’m going to return to Google shortly, but before I do, I think it’s also important to point out another contender that our users tend to turn to instinctively in order to find the context of things they don’t know much about, and that’s Wikipedia. If you’ve been following the work of the organization, you know that Wikipedia has been investing in retooling its site so that it works well in a mobile environment. In fact, you can download the Wikipedia app now and use it to discover Wikipedia entries that are nearby (these are the ones near my house) – at least the nearby ones that have been appropriately geocoded with longitude and latitude coordinates.
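That ‘nearby’ lookup is exposed through MediaWiki’s public API as a ‘geosearch’ query. As a small illustration (no request is actually sent here, and the exact parameter behaviour should be checked against the current API documentation), here is how such a query URL can be assembled in Python:

```python
from urllib.parse import urlencode

def geosearch_url(lat, lon, radius_m=1000, limit=10):
    """Build a MediaWiki 'geosearch' query URL for geocoded Wikipedia
    articles near a coordinate. Only constructs the URL; fetching it
    (with urllib or requests) returns JSON results, each carrying a
    title, coordinates, and distance from the query point."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",  # latitude|longitude
        "gsradius": radius_m,       # search radius in metres
        "gslimit": limit,
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)
```

So the entries near my house would come from something like `geosearch_url(42.3167, -83.0333)` – provided, of course, that the articles in question have been geocoded at all.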

So we have two potential services that can help us find more information about the world immediately around us: one through Google and one through Wikipedia. One is a completely closed system driven by advertising. The other is committed to being ad-free, user generated and user supported.

But the use of Wikipedia as a means to express community information to the community is not entirely problem free. The largest problem is that simply being and existing is not enough to have a presence in Wikipedia – entries must pass the muster of ‘notability’ with the editors of Wikipedia. Even if you are Jimmy Wales and you write a stub of an article about a butcher/restaurant you’ve visited, you are going to have an editor questioning the presence of that entry on the site and deleting your page due to a lack of notability.

There are many struggles inherent within Wikipedia, and the one that I find particularly interesting is the ongoing battle between the Inclusionist and the Deletionist factions of the editor corps. The issue of notability is particularly problematic because it means that Wikipedia – if it’s not mindful – can end up perpetuating systems that already tend not to extend notability to groups such as women.

Wikipedia is also biased towards media over reality. Every single episode of any cartoon is going to be considered notable enough for inclusion. It’s for this reason that there are so many more porn stars on Wikipedia than female scientists.

It is essential that we extend this scrutiny to Google, especially as Google Maps embeds so much of our understanding of the world around us. It is important to foster a mindfulness towards what is found in Google Maps and, more importantly, a consideration of what is missing. If you’ve ever traveled with small and restless children, you may have done what I have done and tried to search for a nearby playground. And perhaps it is only then that you realize that public playgrounds are features that are generally not expressed in Google Maps unless they are businesses.

It is not generally understood that Google Maps is fundamentally different from what we generally think of as maps: objective representations of space. The Google map that I see is going to be different from the map that you see, depending on what Google knows about my searching habits, and perhaps even about where I like to go out to eat.

We tend to think of maps as representations of real and definite things, and maps in general are not particularly strong at presenting uncertainty. Whenever national territory is contested, Google makes it a policy to appease both sides by showing you the map that you would most likely want to see.

This is a story in which a journalist discovers that a particularly affluent neighbourhood in Hollywood – frustrated by all the tourists coming through the area on their way to the famous Hollywood sign – managed to wield enough influence that Google Maps now directs tourists to walk an hour and a half away to see the sign at a distance from the Griffith Observatory, instead of using the trails in the nearby park to see it up close.

It’s as if, when you control the map, you control the territory.

I don’t have many stories like the one about the Hollywood sign, but it is enough to give pause, especially as we consider that it wasn’t long ago that the express mission of Google was to have us all wearing Google Glass so that directional and location information would be served directly into our vision.

Another consideration to keep in mind is that we are judged not only by who we are but by where we are from. This is a particularly click-baity headline, but the story behind it is still an interesting one: some companies price their products depending on what demographic information your zip code betrays about you.

What happens when Google gets to decide what you call your neighbourhood? What happens when the real estate industry has a vested interest in extending the boundaries of gentrifying neighbourhoods at the expense of your own?

Even though we all now know that we live in the age of BIG DATA, we really don’t know how high the stakes are and it might be many years before we realize how the maps of today shape the territories of tomorrow.

We must not forget that from the 1930s to the 1960s, African Americans were effectively cut out of the legitimate mortgage market due to a banking practice called redlining. Redlining is named after the practice of outlining on a map in red the neighbourhoods where a good or service would not be extended, based on the people who lived there. As you can see, the redlining maps made by banks in Detroit show a pattern that is eerily similar to the maps on the right, of present-day poverty levels in the city. It makes one pause to consider which algorithms of today will also have generational effects.

I remember one particular lecture during my undergraduate degree when my professor introduced our class to the idea that while we can occupy the same places, we – men and women of various ages and backgrounds and orientations – all experience these spaces differently and so they shouldn’t be considered as equivalent.

That statement is, in one way, completely obvious and self-evident. But this observation can still be a complete revelation to people who may be less aware how others in positions of various vulnerabilities move through space in a way in order to minimize aggravation or harm to their person. Behind me on the screen is an image of the Negro Traveller’s Green Book which was a travel guide for African Americans during the time of the 1930s to the 1960s to help them specifically navigate their own country.

We all navigate the world differently depending on who we are. Are we in a wheelchair? Are we vegetarian?  Do we need a private place to pray during the day? Are we a dad who needs a bathroom with a changing table? Are we frequently mistaken for another gender? Are we a mother who wants to nurse her child in a private space that’s not a bathroom? Are we homeless and in need of a place to clean up?

For all our apps on devices, every city still needs some exploration to give up its secrets or a community to let you in on them.

How can the library help our communities make their place their own?

Last section! Let’s explore space/time.

There was a time – not that long ago – in which many a library reference desk would have its own set of reference sources. Sometimes these would be collections of facts captured on reference cards, as pictured behind me, and sometimes they were developed into what we called the Vertical File. Sometimes these collections were for library staff; sometimes we made them readily available to the public. Many times these collections were deeply local, and the reason they were maintained by libraries was that no one else was doing this work, and it was work that served the community’s information needs.

One of the things that I find so completely and utterly perplexing about librarianship is that we seem to have given up this practice.

One of the projects that I wish more libraries would consider supporting is the LocalWiki project. Like Wikipedia, LocalWiki is a grassroots effort to collect, share and open the world’s knowledge. But unlike Wikipedia, LocalWiki’s goal is to capture a place’s local knowledge so that anyone can learn about where they live – their local government, the history of their neighborhoods, the schools, the social services such as food banks – every facet of life in their community. If you are interested, please note that the Ann Arbor District Library has had some involvement in the Ann Arbor LocalWiki project.

There are, of course, alternatives for capturing local knowledge. Several slides back, I featured the Wheelmap website, which seeks to capture and share places with wheelchair accessibility. One of the reasons I chose to showcase that particular project is that the information shared on Wheelmap also gets added to OpenStreetMap for other organizations to download, reuse, and add to their own maps.

OpenStreetMap is a little known but frequently used project that is, essentially, a Wikipedia for maps. I understand that the notion of a map that anyone can change is fundamentally unsettling to many people, but if you use apps such as Foursquare, Pinterest, or Github, you’ve already seen and used OSM. If you would like to learn more about OpenStreetMap and/or web mapping, allow me to plug a three-part webinar series from ALA by Cecily Walker and myself called Re-Drawing the Map.

I believe everyone in librarianship should learn a little bit more about geospatial data, because I believe there is a slow – what academics would call – spatial turn happening in the profession.

 Organizations like the New York Times frequently present their data journalism as a map because they know that the map is a visualization that allows their readers to immediately hone in on the place and context that means the most to them. Our readers and our researchers could enjoy the same benefit.

On the screen here is the map interface of a photography collection from York University that has been digitized and presented using the Leaflet JavaScript library.

These, I believe, are some of the first steps towards a future in which we can imagine one day finding relevant historical documents and images based on where one is standing.

There is much work to be done for such a future to come about. The amazing people of the New York Public Library are attempting to build a piece of "civic infrastructure" called the Space/Time Directory. The Space/Time Directory will be a map with layers and a time slider, as well as a discovery tool that turns the city itself into a library catalog. The data produced by the project will be placed in the public domain, and the project is being built so that others will have the ability to build Space/Time directories for their own cities. They write, "The NYC Space/Time Directory will make urban history accessible through the kinds of interactive, location-aware tools used to navigate modern cityscapes.

It will provide a way for scholars, students, and enthusiasts to explore New York City across time periods, and to add their own knowledge and expertise.”

Perhaps this is the city as classroom we’ve been waiting for.

And just in case you thought it was safe not to talk about the future of libraries…. I’m so sorry.

In either 1976 or 1978, a year before or after City as Classroom, Marshall McLuhan paired up with Robert K. Logan, a physics professor from the University of Toronto, to write a book on the future of libraries (because, as we know from so many think pieces as of late, the one thing you never do when you want to know about the future of libraries is actually talk to a librarian). That work was never published, and the only excerpts I've seen of it online are from an Australian art magazine called Island, and of those, fewer than 500 words appeared online. From that, I'm going to end my talk of three sections with three quotations, or 'McLuhanisms'.

This is a strong counter-idea to the literal law that we've been taught, which is that the library is a growing organism. We might have to find new forms to thrive in our evolving niche.

Libraries need to have better control over the flow and storage of the information we provide for our communities if we want to see that information become embedded within our communities. We need systems that allow us to add geospatial data to allow for spatial discovery. We may also want to create the civic infrastructure that would allow our communities to learn from and share with each other through us.

I included this quotation not to cast shade on those who are involved in management or who use data to make better decisions, but because it brings me relief to know that, for all the foresight Marshall McLuhan had about how electronic information was going to change everything, he was still very confident that the library would remain an important part of human culture.

And what I love about this particular quotation is that he reminds us that the library's mission is to serve inspiration and creativity, which is something I know you have all done and will share during the next three days of the WLA conference.

Libraries are for use. And by use, I mean copying.

On Tuesday, I had the pleasure of speaking to the good people of ILEAD USA. The words below are the notes that I brought to the stage with me. If you want to hear what I actually said during my talk, you can watch the talk via YouTube.

My name is Mita Williams, but because it's October I've changed my name on Twitter to something Hallowe'en related. You can still find me there, and in many other online places, as copystar.

I am going to start with a statement of disclosure. I use copystar as my IRC nick and Twitter handle because years ago I learned there was a Japanese photocopier company called Mita Copystar. And so, even though today I am going to be talking about copying and the library, I am not a financial beneficiary of the photocopier industry.

And I’m not going to be talking about the legalities of photocopying in the library. Instead I’m going to be exploring this particular idea: the use of copying as a means of collection development.

Now I think it’s safe to say that as librarians, we don’t tend to think about collection development in this way —  we buy materials or subscribe to them — which I think is interesting because arguably the most famous library in the world was built from copies.And piracy. Literally piracy.

The Great Library of Alexandria became great because it was meant to be great and it was funded enough to be so. Copies of scrolls from far and wide were acquired by purchase but were also acquired using more dubious practices.  Of note, ships entering the harbour of Alexandria would be searched for scrolls and these would be seized, brought to the Great Library where a copy would be made, and the Great Library of Alexandria would keep the original.

I would like to ask you: in this world in which we can hit Control-A, Control-C and Control-V (otherwise known as Select text, Copy text, and Paste text) and copy a book in just three keystrokes, why don't we have a Great Library of Alexandria of ebooks now? Why do we still look backward in time, instead of forward, when we think of a collection of all the most important written works that the world has ever seen?

Depending on your level of fluency when it comes to the legal framework of ebooks, you may or may not know the bad guys that are standing in the way of digital preservation and our future Library of Alexandria: DRM and the DMCA.

In order to better express what I believe might be happening in our day and age, I made this flow chart. On this slide I’m trying to describe the circle of life of print books: an author writes, a publisher prints and sells, a library buys and shares, a reader reads, a reader writes…  it is a thing of beauty (the process, not my chart).

Now, as I’m Canadian, I’m not as familiar with US law as my own. For example, we make sue of Fair Dealing whereas you guys speak of Fair Use.  So I have to rely on sources such as the good people of ALA to let me know that the reason why libraries are allowed to lend print books in the first place is because of something known as the First Sale Doctrine. The gist of which is this: if you buy a print book, you can re-sell, rent, or lend the book to someone else without having to acquire permission from the copyright holder. 

But as librarians we all know that the rules around ebooks are fundamentally different. The parameters of what you can do with an ebook are not governed by the First Sale Doctrine and are instead set by a license agreement between you and the publisher. Again, this text is from the ALA's "Libraries Transform" site:

The usual e-book license with a publisher or distributor often constrains or altogether prohibits libraries from archiving and preserving content, making accommodations for people with disabilities, ensuring patron privacy, receiving donations of e-books, or selling e-books that libraries do not wish to retain.

So as I mentioned before, in Canada we have something called Fair Dealing, which has established that you can copy and use some of an ebook for the purposes of research, private study and teaching.

This is great if you are an instructor at a university and you would like to provide your students with a copy of an essay from an anthology.  It’s great, that is, unless your library has signed a license that trumps Fair Dealing and instead establishes that the contents of the ebooks in question cannot be copied and shared as such and can only be linked to in a course reserve system or learning management system.

And copying a link from the ebook platform is somehow, perhaps coincidentally, absurdly difficult to do. Now the library is in the position of needing to communicate to faculty how to find a permanent link to books at a chapter level, and how to add an ezproxy prefix to said link if that link is to be added to the Learning Management System, and … at this point, no one can even.

The most egregious example of this that I know of is the Harvard Business Review, which a couple of years ago took its top 500 articles and said that if you want to do anything other than read them – including the ability to directly link to said articles – colleges and universities would have to pay an additional fee, said to be in the five figures for at least one institution.

Many institutions have refused to pay the ransom for these 500 articles and have opted to keep their print subscriptions. Clearly, we need more than read-only access to library materials, but it's unclear where that line gets drawn from library to library. How much should the ability to print an item cost? How much is the right to save a personal copy? Why are these questions even acceptable?

Even material that’s in the public domain can be effectively be taken out of it as soon as its been placed in a wrapper of what’s known as Digital Rights Management or DRM.  In this somewhat well-known example, Adobe once suggested that one could not read aloud its ebook version of Alice in Wonderland.

And so the population who could arguably benefit the most from the ascendance of ebooks – the visually impaired – are by and large restricted from using text-to-speech software, lest that ability cannibalize the publisher's market for audiobooks.

And there are other shortcomings with DRM. For one, many DRM systems require some form of authentication with a server online. If this server is down, you may not be able to get access to the game, movie, or ebook that you have already locally downloaded.  People who have tried to do the right thing and bought music from an online retailer such as MSN Music, Yahoo Music Store, or Puretracks (like me) can no longer access their licensed music because the servers that handled the DRM authentication have long been taken down.

One way to think of DRM is as a lock. As digital locks go, DRM isn't actually particularly difficult to break. But it is very much illegal to break, because the DMCA, or Digital Millennium Copyright Act, makes it illegal to even try to bypass DRM.

The terms of the agreement that are enshrined in DRM are ideally formed from a negotiated agreement that balances the needs and desires of the publisher and the reader. However, as we have seen with the example of the Harvard Business Review, publishers are largely in the position of power because they can always opt to cut libraries completely out of ebook circulation.

This webpage I found captures almost everything wrong with the state of ebooks and libraries today. And we are at this point – as I think we all know – because libraries have largely outsourced the management of ebooks to Overdrive…

… and the management of the DRM which is largely performed by Adobe, who does not have the same commitment to reader privacy as libraries.

It should give us pause that DRM is so effective at locking third parties out of a producer's relationship with its customers that companies such as John Deere are telling farmers that it's now illegal for them to repair their own farm equipment, because the electronics of the tractor are now encased in DRM and legally safeguarded by the DMCA.

So now what? Are we screwed?

I know of librarians who refuse to buy ebooks with DRM for their own use but I only know of two libraries that have made the same pledge (other than the library where Barbara Fister works).

That being said, I know of many librarians who know how to bypass DRM but will not suggest to the public that they can do so, because of the illegality of it all. If you are interested in exploring a "what if" scenario of librarians transgressing DRM, you might be interested in this talk by Justin Unrau.

Now I’m sorry to starting this talk off on such a dark note but my purpose was to get the bad news out of the way. I also wanted to talk about DRM and the DMCA because I have a feeling that many of us in the profession aren’t aware that the capacity to make exceptions to the DMCA and break DRM is – in theory – in our wheelhouse.  Every three years, the Librarian of Congress is able to make exception to the DMCA. It is one of these exceptions that has made it possible to unlock a phone that is provided by a carrier.

This means that unlocking DRM for the purposes of accessibility and preservation *is* possible for libraries.

But this doesn’t mean libraries get to wait until that day that happens. Libraries are already embarking on a variety of strategies to thrive in a world where text is no longer a scarce resource

Now, I suspect you at ILEAD are here to discover and share your own strategies, which just might include…

lending out objects that aren’t easily copyable such as musical instruments, scientific equipment, or household tools

building environments where objects can be made

exchanging co-working space for community mentoring or teaching

… hosting pop-ups or running events such as How to Festivals in your community…

… or just being there for community when your community needs you most.

But despite DRM and the DMCA, I still want you – my dear colleagues – to consider the role of copying in collection development.

And I want you to consider this because culture itself depends on copying…


All mankind is of one author, and is one volume; when one man dies, one chapter is not torn out of the book, but translated into a better language; and every chapter must be so translated. . . .

—John Donne

The same might be said of all art. I realized this forcefully when one day I went looking for the John Donne passage quoted above. I know the lines, I confess, not from a college course but from the movie version of 84, Charing Cross Road with Anthony Hopkins and Anne Bancroft. I checked out 84, Charing Cross Road from the library in the hope of finding the Donne passage, but it wasn’t in the book. It’s alluded to in the play that was adapted from the book, but it isn’t reprinted. So I rented the movie again, and there was the passage, read in voice-over by Anthony Hopkins but without attribution. Unfortunately, the line was also abridged so that, when I finally turned to the Web, I found myself searching for the line “all mankind is of one volume” instead of “all mankind is of one author, and is one volume.”

My Internet search was initially no more successful than my library search. I had thought that summoning books from the vasty deep was a matter of a few keystrokes, but when I visited the website of the Yale library, I found that most of its books don’t yet exist as computer text. As a last-ditch effort I searched the seemingly more obscure phrase “every chapter must be so translated.” The passage I wanted finally came to me, as it turns out, not as part of a scholarly library collection but simply because someone who loves Donne had posted it on his homepage. The lines I sought were from Meditation 17 in Devotions upon Emergent Occasions, which happens to be the most famous thing Donne ever wrote, containing as it does the line “never send to know for whom the bell tolls; it tolls for thee.” My search had led me from a movie to a book to a play to a website and back to a book. Then again, those words may be as famous as they are only because Hemingway lifted them for his book title.

Literature has been in a plundered, fragmentary state for a long time.

If you have the chance, I would highly recommend you read the rest of Lethem's The Ecstasy of Influence, and you have to read all the way to the end — I'm not going to spoil it for you!

And it’s not only literature that has been a plundered state of some time. One can argue that one learns the art of many a particular creative field by the act of copying, transforming, and combining elements.

The wallpaper from the previous slide is from the Everything is a Remix project, which makes the case that much of culture contains copied elements of works of the past that are transformed, recombined, and remixed. The first part is dedicated to music, the second to movies, the third to invention, and the fourth to the system failures of intellectual property.

So please let me be clear. I am personally not anti-copyright, I do think of the artists who struggle to make a living while pursuing a creative career, and I'm certainly not giving any personal license to plagiarize. But as this graphic from Steal Like An Artist suggests, it should not diminish art or artists to recognize that creative work does not come from thin air.

Anyways, if this topic interests you at all, I recommend these reads – although – I ain’t gonna lie – my favorite is Reality Hunger, which changed the way that I look at the novel.

When the personal copying of intellectual property is outlawed, only outlaws and artists can copy. For example, I’m pretty sure that Mick Jones of the Clash does not own the copyright of most of the 10,000 items in his collection and therefore, isn’t in a legal position to invite and allow users to make and take home scanned copies of the items in his collection for themselves. While Jones has named his collection “The Rock and Roll Public Library” it’s really more like a moving curated art exhibition.

Some years ago, C Magazine, which is a Canadian magazine dedicated to the visual arts, dedicated an entire issue to libraries. A former colleague of mine, Adam Lauder, wrote an article within it called Performing the Library.

And that’s where about I heard of Jeff Khonsary’s The Copy Room. The project involved a room in Vancouver where there were photocopiers for people off the street could use for free on the condition that they leave a copy in the room. The copies build a reading room of material that reflected the community that use the copiers. It is sort of like the harbour of Alexandria, without the coercion.

So, let’s take a scroll from the Copy Room and the Library of Alexandria Playbook and consider how we could also build collections using copies despite DRM and DMCA

Let’s consider copying through the act of publishing. Or in other words, in digitization.

There are other libraries that have done this, but the first library I heard of using this strategy is the Winnipeg Public Library, which encouraged local bands to bring in their own music memorabilia, such as posters from gigs present and past; the library would scan the work, keep a copy, and give the original along with the high-end scan back to the user.

The Edmonton Public Library provides a similar collection and has recently offered to host 100 albums from local bands for distribution to the library-card-holding public.

My own public library, the Windsor Public Library, has one of the most successful self-publishing programs that I know of, with over 10,000 books published in 3 years using the Espresso Book Machine. One can only imagine if the library also ran a book distribution service for the books it published, just as self-publishing services such as Amazon and Lulu do.

That’s admittedly a large ask, when, as we know, most libraries don’t even host the ebooks that they already have. But there are exceptions – like the Evoke system of the Douglas County Libraries of Colorado who, as they say in their manifesto, they hope will become an ebook service without unnecessary constraints on access by the public.

I also think we should remember that there are contexts in which we can only make copies before an item is published.

And that context is the university — where we should not forget that copying plays, and has played, a role in scholarship since the Middle Ages. In times of old, there were scribes who would make copies for students and faculty, and I think we all know of that little copy shop that's not quite on campus but really close, where they don't blink an eye when someone comes in with a textbook.

But the scenario I want to bring to mind is the present day. I've been in academic librarianship for over sixteen years, and that's a long time to be in the midst of the ever-present Serials Crisis. And this crisis persists because faculty give their copyright away to the most prestigious journals, who resell the scholarship back to libraries with obscene profit margins.


One of the strategies employed by institutions is to create a safe harbour for scholarship called an institutional repository, where faculty of an institution are encouraged – either by good will or by mandate – to place a copy of their scholarship. In some ways, it's not dissimilar to the idea of legal deposit that some national libraries require of publishers in their country.

You know that this idea is a powerful one because, until recently, the publishing behemoth Elsevier decreed that it would only allow its authors to deposit in their home institution's repository if there was no mandate in place [image].

Speaking of legal deposit…

… the British Library has extended its traditional requirements of books to be placed in its collection and have extended its mandate to collect web sites of the nation.

Academic libraries are also beginning to investigate and pursue similar web archiving. But I don't think mine is at the moment (at least not that I know of), and that makes me worry a bit. I am reminded of the experience of the University of Virginia Libraries, who already had some experience with web archiving when one of the largest crises to hit their campus suddenly erupted, and they were there and ready to capture the history as it unfolded.

There are options if you think it’s important to preserve a website for the future even if your library doesn’t have the infrastructure in place. One option is the Save Page Now option that’s provided by the Internet Archive’s Wayback Machine.
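For the curious, Save Page Now requires no special infrastructure: you fetch the web.archive.org/save/ address followed by the URL you want captured. Here is a minimal Python sketch of building that request; the page being saved is just a placeholder.

```python
# Build a "Save Page Now" request URL for the Internet Archive's
# Wayback Machine. Fetching the resulting URL asks the archive to
# capture the target page. The target below is a placeholder.
from urllib.parse import quote

def save_page_now_url(target: str) -> str:
    """Return the Wayback Machine capture URL for a given page."""
    # Keep ":" and "/" unescaped so the target stays a readable URL.
    return "https://web.archive.org/save/" + quote(target, safe=":/")

print(save_page_now_url(""))
# https://web.archive.org/save/
```

Requesting that URL (in a browser, or with urllib.request) performs the actual capture, so the sketch stops at constructing it.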

It’s important to be aware that there is a very simple defense mechanism that can be used to prevent websites from being added to the Internet Archive and that’s a simple request is what is known as the robots.txt file – a file that designates whether the owner doesn’t want their page indexed in search engines.

Unfortunately, there are terrible side effects from such a simple mechanism. A site might be archived and accessible through the Internet Archive's Wayback Machine, but if the domain ever expires and is then bought by someone else who adds a robots.txt file, then the archive of that same address will be lost forever.
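Python's standard library ships a parser for exactly these files, which makes it easy to see what a blanket robots.txt block looks like to a compliant crawler. This is a sketch with a made-up two-line robots.txt; "ia_archiver" is used here simply as an example crawler name.

```python
# Sketch: how a compliant crawler reads a robots.txt that blocks
# everything. The two-line robots.txt below is a made-up example.
from urllib.robotparser import RobotFileParser

robots_lines = [
    "User-agent: *",   # applies to every crawler
    "Disallow: /",     # ...and forbids every path
]

parser = RobotFileParser()
parser.parse(robots_lines)

# No compliant crawler may fetch any page, archival crawlers included,
# which is how a new domain owner can retroactively hide a site's history.
print(parser.can_fetch("ia_archiver", ""))  # False
```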

Which all goes to say this: relying on a single copy is a dangerous way to preserve our culture. That's why there's the strategy called LOCKSS – Lots Of Copies Keep Stuff Safe.

Social media has its own challenges in terms of archiving.

If you want to collect, for example, all the tweets related to the police shooting of Michael Brown in Ferguson, Missouri, you have to use Twitter's API in order to maximize what you can capture, and Twitter's API only goes back nine days, so you need to act in the moment.

Alternatively, you can pay Twitter for the tweets after the fact. The present is free but the past has a cost.

Of course, how much you can access Twitter's archives and the conditions of Twitter's API are subject to change at Twitter's discretion. Recently, Twitter shut down the access of 31 accounts that captured the deleted tweets of politicians from around the world.

That’s not to say that the mass collection of tweets and other social media sites is without issues related to personal privacy and the right to be forgotten.

And for my last consideration of copying as collection development, I would like to suggest that libraries provide things for their readers to copy.

The Prelinger Library and The Reanimation Library are both examples of books and ephemera, carefully selected from a variety of sources – including weeded and discarded published material from libraries – brought together to create collections of visually interesting material that serve as inspiration for artists and writers.

Libraries have done a very good job experimenting with makerspaces, but I think these libraries would be remiss not to also read these two books and to consider how their library can also be thought of as inspiration and raw material for the various creative arts.

This is an example from the blog Handmade Librarian, from which the previous book, Bibliocraft, came. The activity shown involves making fancy bookmarks featuring ornamental stitching.

That stitching was based on the braid alphabet found in the Etching and Engraving Picture File, a collection that the San Francisco library clearly marks as copyright-free images. Creating similar collections is an endeavour I wish all libraries would undertake.

Penultimate section!

Please don’t be disappointed if a participant in your library’s National Novel Writing Group decides to write Fan Fiction.  Remember how people learn to be creative.

I like to think that there's a growing understanding of those who create 'transformative works' and a better appreciation for these writers, who are both writing out of love and writing within a community of readers who can provide support and guidance.

When we can, we should consider placing work in the creative commons so others may transform and adapt our work for their own use. Creative Commons Licenses are incredibly important and powerful tools. Everything on my blog that’s my own work is designated as CC-BY.

But let’s not forgot the larger picture.

Copying is an act of love. Copying is how we as readers and writers demonstrate such love. As Cory Doctorow and many others have also noted, the greatest threat to artists is not piracy but obscurity.

Last set of slides!

Remember way back when I showed you this circle of life of printed material?

( BTW, as these slides are my own work they are available for you to reuse and remix as you see fit.)

Then DRM came along …

But now we know that this is not the whole picture. Libraries can bring their communities to the world by facilitating works that are in the creative commons and/or open access.

The title of my talk, as you have probably figured out, was a riff on probably the only thing from our collective library education that we can all remember. The first of Ranganathan's laws is that books are for use.

A couple of years ago, librarian and author Barbara Fister re-wrote the 5 laws in the most cynical language of our days.

But then she re-wrote the same laws this way. I can't possibly improve on how she captured many of the ideas that I was hoping to share with you today, and so with that I would like to say…

Thank you.

Library of Cards


On Thursday, September 10th, I had the honor and the pleasure to present at Access 2015. I’ve been to many Access Conferences over the years and each one has been a joyful experience. Thank you so much to those organizing Access YYZ for all of us.

Have you ever listened to the podcast 99% Invisible?

99% Invisible is a weekly podcast dedicated to design and architecture and the 99% percent of the invisible activity that shapes our experiences in the world.

They’ve done episodes on the design of city flags, barbed wire, lawn enforcement, baseball mascot costumes, and it’s already influenced my life in pretty odd ways.

If I were able to pitch a library-related design story to its host, Roman Mars, for an episode, I would suggest the history of the humble 3 x 5 index card.

That being said, for this presentation, I am not going to present to you the history of the humble 3 x 5 index card.

That’s what this book is for, Markus Krajewski’s 2011 Paper Machines published by MIT Press.

Now, before I read his book, I had believed that the index card was invented by Melvil Dewey and commercialized by his company, the Library Bureau. But Krajewski makes the case that the origin of the index card goes as far back as 1548, with Konrad Gessner, who described a new method of processing data: cut up a sheet of handwritten notes into slips of paper, with one fact or topic per slip, and arrange as desired.

According to Krajewski, when this technique goes from provisional to permanent – when the slips that describe the contents of a library are fixed in a book – an unintended and yet consequential turn takes place: it gives rise to the first card catalog in library history, in Vienna around 1780.

Most histories of the card catalog begin just slightly later in time — in 1789, to be precise — during the French Revolution. The situation at hand was that the French revolutionary government had just claimed ownership of all Church property, including its substantial library holdings. In order to better understand what it now owned, the French revolutionaries started to inventory all of these newly acquired books. The instructions for how this inventory would be conducted are known as the French Cataloging Code of 1791.

The code instructed that, first, all the books were to be numbered. Next, the number of each book, as well as the bibliographic information of each work, was to be written on the backs of two playing cards – which was possible because, at that time, the backs of playing cards were blank. The two sets of cards were then put into alphabetical order and fastened together. One set of cards was to be sent to Paris, while a copy remained in each library.

On the screen behind me, you can see two records for the same book.

Again, my talk isn’t about bibliographic history, but I want to return back to the 16th century to Gessner for some important context. The reason why Gessner was making all those slips in the first place was to construct this, the Bibliotheca Universalis which consists of a bibliography of around 3,000 authors in alphabetical order, describing over 10,000 texts in terms of content and form, and offering textual excerpts. As such, Gessner is considered the first modern bibliographer.

And you can find his work on the Internet Archive.

Gessner’s Biblioteca Universalis wasn’t just a bibliography. According to Krajewski, the book provides instructions to scholars how to properly organize their studies through the keeping excerpted material in useful order. Gessner was describing an already established practice. Scholars kept slips or cards in boxes, and when they had the need to write or give a lecture or sermon, they would take the cards that fit their theme, and would arrange those thoughts and would temporarily fix them in order using devices such as the one pictured. This hybrid book has guiding threads that stretch over the page so that two rows of paper slips can be inserted and supported by paper rails.

Until the Romantics came around and made everyone feel embarrassed about taking inspiration from other people, it was common for scholars to use Scholar’s Boxes. Gottfried Leibniz actively used what was known as an excerpt cabinet to store and organize quotations and references.

Leibniz’s method of the scholar’s box combines a classification system with a permanent storage facility, the cabinet. So in a way this is similar to the use of Zotero or other citation management systems, but instead uses loose sheets of paper on hooks. The strips are hung on poles or placed into hybrid books

And that’s the reason why I wanted to start my talk with a brief history lesson. To remind us that there is a common ancestor to the library catalog and the scholar’s bibliography, and that is the index card.

So as we’ve learned, from as far back as Gessner’s 16th Century, writers have been using cards and slips of paper to rearrange ideas and quotations into texts, citations into bibliographies, and bibliographic descriptions into card catalogues.

You can still buy index cards and card boxes at my local campus bookstore. That's because there are authors today who still use index cards to piece together and re-sort parts of their papers or novels, or who use and rearrange digital cards inside writing software tools such as Scrivener to generate new works.

Now, I don’t write this way myself. But I do use Zotero as one of the tools I use to keep track of citations, bookmarks, saved quotations, and excerpts of text that I have used or might use in my own work as a writer and academic librarian.

Zotero acts as an extension of your web reading experience, and it operates best as an add-on to the Firefox browser. If you use Zotero, you can usually capture citations that you find on a page easily, either because someone who supports Zotero has already developed a custom text scraper (called a translator) for the database or website you are looking at, or because the citation has been marked up with text that’s invisible to the human eye but can be found in the span HTML tags that surround the citation, using a microformat called COinS.
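To make that invisible markup concrete, here is a minimal sketch of how a tool might read a COinS span. The example span and its metadata are invented for illustration; real COinS data is an OpenURL query string carried in the `title` attribute of a `<span class="Z3988">` tag.

```python
# A minimal sketch of reading a COinS span; the span below is invented.
from html.parser import HTMLParser
from urllib.parse import parse_qs

class CoinsParser(HTMLParser):
    """Collects OpenURL key-value pairs from <span class="Z3988"> tags."""
    def __init__(self):
        super().__init__()
        self.citations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and "Z3988" in attrs.get("class", ""):
            # The citation lives in the title attribute as an OpenURL query string
            self.citations.append(parse_qs(attrs.get("title", "")))

html = ('<span class="Z3988" title="ctx_ver=Z39.88-2004'
        '&amp;rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal'
        '&amp;rft.atitle=Introducing+unAPI'
        '&amp;rft.au=Chudnov%2C+Daniel"></span>')

parser = CoinsParser()
parser.feed(html)
citation = parser.citations[0]
print(citation["rft.atitle"][0])  # → Introducing unAPI
print(citation["rft.au"][0])      # → Chudnov, Daniel
```

This is essentially what a translator-less capture amounts to: text that is invisible to the reader but perfectly structured for the machine.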

Zotero also allows scholars to back up their citations to Zotero’s servers and, in doing so, share their citations by making one’s library public. Alternatively, scholars can share bibliographies on their own website using the Zotero API, which is so simple and powerful that you can embed a bibliography styled in APA with a single line of code.
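That single line is essentially a call to the Zotero web API, which can return citations already formatted in a chosen citation style. As a hedged sketch (the user ID below is a made-up placeholder), the request behind an embedded APA-styled bibliography looks something like this:

```python
# Sketch of the Zotero web API request behind an embedded bibliography.
# The user ID is a made-up placeholder.
from urllib.parse import urlencode

def zotero_bib_url(user_id, style="apa", limit=25):
    """Build a Zotero API URL that returns an HTML-formatted bibliography."""
    params = urlencode({"format": "bib", "style": style, "limit": limit})
    return f"https://api.zotero.org/users/{user_id}/items?{params}"

print(zotero_bib_url("12345"))
# → https://api.zotero.org/users/12345/items?format=bib&style=apa&limit=25
```

Fetching that URL returns the bibliography as ready-to-embed HTML, which is why a one-line include on a web page is all it takes.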

One of my favourite features of Zotero is not widely known: out of the box, Zotero allows the scholar to generate ‘cards’, called ‘reports’, from your bibliography. When I have a stack of books that I need to locate in my library, I sometimes find it easier to select and generate a report of cards from my Zotero collection than to search, select, and print the items using my library’s expensive ILS.

There is a terrible irony to this situation. As I learned from the Library Journal column of Dorothea Salo, the design problem given to Henriette Avram, the inventor of the MARC record, was to have “computers print catalog cards.”

As Salo says in her piece, “Avram was not asked to design a computer-optimized data structure for information about library materials, so, naturally enough, that is not what MARC is at heart. Avram was asked solely to make computers print a record intended purely for human consumption according to the best card-construction practices of the 1960s.”

Let’s recall that one of the reasons why Zotero is able to import citations easily is because of the invisible text of COinS and translators. 

The metadata that comes into Zotero is captured as strings of text. Which is great – a name is now tagged with the code AU to designate that the text should go in the Author field. But this functionality is not enough if you want to produce linked data.

Dan Scott has kindly shared the code to RIS2WEB, which you can run on a bibliography exported from Zotero to create and serve a citation database that also generates linked data using Schema.org. Afterwards, you can add available URIs.

You can see the results of this work at

When I showed this to a co-worker of mine, she couldn’t understand why I was so impressed by it. I had to hit Control-U on a citation to show her that this citation database contained identifiers from VIAF: the Virtual International Authority File. I explained to her that by using these numeric identifiers, computers will be able not only to find matching text in the author field, but to find that one particular author.
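To illustrate what that looks like under Control-U, here is an invented example (not RIS2WEB’s actual output) of a citation expressed as schema.org JSON-LD, with the author pinned to a placeholder VIAF URI:

```python
# An invented illustration of a citation as schema.org linked data.
# The VIAF number is a placeholder, not a real identifier.
import json

citation = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Introducing unAPI",
    "author": {
        "@type": "Person",
        "name": "Daniel Chudnov",
        # A numeric VIAF URI lets machines match the person, not just the string
        "sameAs": "http://viaf.org/viaf/0000000",
    },
}

print(json.dumps(citation, indent=2))
```

The string “Daniel Chudnov” can match many people; the `sameAs` URI disambiguates the author for any machine that reads the page.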

So can we call Zotero a Scholar’s Box or Paper Machine for the digital age?

I think we can, but that being said, I think we need to recognize that the citations we have are still stuck in a box in so many ways.

We can’t grab citations from a library database and drop them into a word processor without using a bibliographic manager like Zotero as an intermediary to capture the structured data that might be useful to my computer when I need to format a bibliography. Likewise, I can’t easily grab linked data from sites like the Labour Studies bibliography page.

And we still really don’t share citations in emails or social media.

Instead, we share the URLs that point to the publisher or third-party server that hosts said paper. Or we share PDFs that should contain all the elements needed to construct a citation and yet somehow still require the manual re-keying and Control-C-ing and V-ing of data into fields when we want to do such necessary things as add an article to an institutional repository.

Common Web tools and techniques cannot easily manipulate library resources. While photo sharing, link logging, and Web logging sites make it easy to use and reuse content, barriers still exist that limit the reuse of library resources within new Web services. To support the reuse of library information in Web 2.0-style services, we need to allow many types of applications to connect with our information resources more easily. One such connection is a universal method to copy any resource of interest. Because the copy-and-paste paradigm resonates with both users and Web developers, it makes sense that users should be able to copy items they see online and paste them into desktop applications or other Web applications. Recent developments proposed in weblogs and discussed at technical conferences suggest exactly this: extending the ‘clipboard’ copy-and-paste paradigm onto the Web. To fit this new, extended paradigm, we need to provide a uniform, simple method for copying rich digital objects out of any Web application.

Now, those aren’t my words. That’s from this paper Introducing unAPI written by Daniel Chudnov, Peter Binkley, Jeremy Frumkin, Michael J. Giarlo, Mike Rylander, Ross Singer and Ed Summers.

This paper, I should stress, was written in 2006.

Within the paper, the authors outline the many reasons why cutting and pasting data is so infuriatingly difficult in our sphere of tools and data.
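The pattern the paper proposes is deliberately simple: a page advertises an unAPI server, and a client asks that server for any record in a chosen format. A hedged sketch of the request URLs follows; the server address and identifier below are invented placeholders.

```python
# Sketch of the unAPI request pattern; server and identifier are placeholders.
from urllib.parse import urlencode

UNAPI_SERVER = "https://example.org/unapi"  # advertised via <link rel="unapi-server">

def unapi_formats_url(record_id):
    """Ask the server which formats are available for a record."""
    return f"{UNAPI_SERVER}?{urlencode({'id': record_id})}"

def unapi_record_url(record_id, fmt):
    """Fetch the record itself in one of the advertised formats."""
    return f"{UNAPI_SERVER}?{urlencode({'id': record_id, 'format': fmt})}"

print(unapi_formats_url("info:doi/10.1045/example"))
print(unapi_record_url("info:doi/10.1045/example", "mods"))
```

Two tiny URL patterns are the whole protocol, which is precisely the point: anything that can build a URL can “copy” a rich digital object out of a web application.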

But what if there was another paradigm we could try?

In order to see how we might possibly break out of the scholar’s box, I’m going to talk about a very speculative possibility. And in order to set us up for this possibility, I first need to talk about how cards are already used on the web and on our tablets and smartphones.

If you look around the most popular websites and pay particular attention to the design patterns used, you will quickly notice that many of the sites we visit every day (Twitter, Facebook, Trello, Instagram, Pinterest) all use cards as a user interface design pattern.

The use of cards as a design pattern rose up along with the use of mobile devices largely because a single card fits nicely on a mobile screen…

…while on larger surfaces, such as tablets and desktops, cards can be arranged in a vertical feed, like in Facebook or Twitter, arranged as a board, like Pinterest, or as a stack, such as Google Now or Trello.

This slide is from a slide deck of designer and technologist Chris Tse. The rest of this section is largely an exploration of Chris’ work and ideas about cards.

Case in point: Chris Tse states that the most important quality of ‘cards’ is that of movement. But by movement, he isn’t referring to the design’s apparent affordances that make swiping or scrolling intuitive.

The movement of cards that’s really important is how they feed into content creation and content sharing and how cards feed into discussions and workflow.

(How cards fit into kanban boards and shared workflow software like Trello is a whooooole other presentation.)

Social media is collectively made up of individuals sharing objects – objects of text, of photos, of video, of slideshows – and they share these objects with friends and with a larger public. Each of these objects is framed – by and large – within cards.

It’s important to realize that the cards on the web are fundamentally more than just a design hack. If you are familiar with Twitter, you may have started to see cards that don’t just feature 140 characters – you see playable music (such as from Soundcloud), Google Slideshows that you can read through without leaving Twitter, and you can even download third-party apps from Twitter advertising cards. When the business press says that Twitter is a platform, it’s not just marketing hype.

As Chris Tse says, cards are more than just glorified widgets. “When done right”, he says, “a card looks like responsive web content, works like a focused mobile app, and feels like a saved file that you can share and reuse”. As cards become more interactive, he believes they will go from being concentrated bits of content to mini-apps that can be embedded, can capture and manipulate data, or even process transactions.

But why isn’t this more obvious to people? I think the reason cards don’t really feel this way is that most cards can only move within their own self-contained apps or websites.

For example, Google Now cards work with your Google applications – such as your calendar – but don’t interact with anything about your life in Facebook.

That being said, Google and Apple are working on ways to integrate more services into their platforms. In Google Now, I’m regularly offered news stories based on my recent searches, as well as stories that are popular with other readers who read similar stories, via the Feedly RSS reader.

And this is a problem because it’s Google who is deciding whose services I can choose from for such card notifications.

The apps on your smart phone live in a walled garden where things are more beautiful and more cultivated, but it is a place that is cut off from the open web.

The fall of the open web and the rise of the walled garden is not a trivial problem. We should not forget that if you want your app to be available on an iPhone, it must be in the Apple Store, the content of your app will be subject to the Apple review process, and Apple will take a 30% cut of what your app sells for. Content within apps curtails various freedoms.

To bridge this split between the open web and the walled app garden, Chris Tse founded Cardstack. The mission of Cardstack is “to build a card ecosystem based on open web technologies and open source ethos that fights back against lock-in.”

CardStack wraps single-page JavaScript applications as a reusable ‘card’ that can be embedded in native apps and other web apps. According to Chris, HTML5 cards will be able to move between apps, between devices, between users and between services.

CardStack itself is composed of other JavaScript libraries, most notably Conductor.js and Oasis.js, and I cannot speak to this any further other than to repeat the claim that these combined libraries create a solution for embedded content that is more secure than the iFrames of widgets past.

But notice the ‘Coming Soon’ statement in the top left-hand corner? CardStack is still in beta, with SDKs still being developed for iOS and Android.

Despite this, when I first stumbled upon Chris Tse’s presentations about Cardstack, I was really excited by his vision. But the crushing reality of the situation settled mere moments later.

Yes, a new system of cards to allow movement between siloed systems that could work in mobile or desktop environments, that would be wonderful – but wasn’t it all too late?

And what does it mean if we all collectively decide, that it is all just too late?

One of the challenges of promoting a new sort of container is that you can’t really show it off until you have some content to contain. You need a proof of concept.

When I checked in later to see what Chris was doing as I was drafting this presentation, I learned that he was now the CTO of a new platform – one that he confirmed for me is built using Cardstack as a component.

This new platform has its origins in the art world’s Rhizome Seven on Seven conference. The Seven on Seven conference pairs seven visual artists with seven technologists for a 24-hour hackjam, and in 2014 artist Kevin McCoy was paired up with technologist Anil Dash.

McCoy and Dash were both interested in improving the situation of digital artists, whose work can easily be copied. With copying, the provenance of a digital work can be lost, as well as the understanding of what was original and what has been modified.

They talked and worked on a proof of concept of a new service that would allow visual artists to register ownership of their digital work and transfer that ownership using blockchain technology.

Over a year later, this idea has grown into a real platform that is in private beta and is set to be released to the public this month.

I think two things are particularly awesome about this project. First, the platform allows artists to decide for themselves whether the license for their work is in the Creative Commons or requires a royalty, and whether derivatives of their work are allowed.

The other thing I love about this project is its name. If you look at the top left-hand corner of this screen, you will find that the name of the platform is spelled m-o-n-e-g-r-a-p-h. The platform is called MONOGRAPH.

And as we now know, you build a monegraph with cards.

We need to remember that the library and the bibliography are connected by the card. There is the text and then there is the record of the text. The texts that we make available in our libraries are themselves frequently pieced together from ideas and quotations that were captured, sorted, and assembled from smaller pieces, sometimes on cards.

As we continue to invest in crucial new endeavours in the digital realm, I think it’s essential that librarians find new ways to surface our resources and allow them to be shared socially, and find the means by which scholars can save, sort, and re-use the resources that they find in our collections.

We are part of a generative process. Cards of single ideas are arranged and stacked to build theses, which in turn build papers and books, which in turn form bibliographies, which fill libraries.

I would like libraries to find a way back to Gessner’s Bibliotheca Universalis, a place where the library and the scholar were both connected.

After all…

Advice from a Badass: How to make users awesome

Previously, whenever I have spoken or written about user experience and the web, I have recommended only one book: Don’t Make Me Think by Steve Krug.

Whenever I did so, I did so with a caveat: one of the largest drawbacks of Don’t Make Me Think is captured in the title itself: it is an endorsement of web design that strives to remove all cognitive friction from the process of navigating information. This philosophy serves businesses that are trying to sell products with a website, but it doesn’t sit well with those of us who are trying to support teaching and learning.

Today I would like to announce that I hereby retire this UX book recommendation because I have found something better. Something several orders of magnitude better.

I would like to push into your hands instead a copy of Kathy Sierra’s Badass: Making Users Awesome. In this work, Kathy has distilled the research on learning, expertise, and the human behaviours that make both of these things possible.

You can use the lessons in Badass towards web design. Like Don’t Make Me Think, Badass recognizes there are times when cognitive resources need to be preserved, but unlike Don’t Make Me Think, in Badass Kathy Sierra advises when and where these specific moments should be placed in the larger context of the learner’s journey towards expertise.

You see, Badass: Making Users Awesome isn’t about making websites. It’s about making an expert Badass.

In her book, Sierra establishes why helping users become awesome can directly lead to the success of a product or service, and then builds a model with the reader to achieve this. I think it’s an exceptional book that wisely advises how to address the emotional and behavioural setbacks to learning new things without having to resort to bribery or gamification, neither of which works after the novelty wears off. The language of the book is informal but the research behind the words is formidable.

One topic that Badass covers that personally resonated was the section on the Performance Progress Path Map as a key to motivation and progress. I know that there is resistance in some quarters to the articulation of learning outcomes by those who suspect that the exercise is a gateway to the implementation of institutional standards that will eliminate teacher autonomy, or eliminate teachers altogether. But those fears don’t apply in this context and should not inhibit individuals from sharing their personal learning paths.

The reason this topic hit so close to home is that I found learning to program particularly perilous because of the various ‘missing chapters’ of learning computing (a phrase I picked up from Selena Marie’s not unrelated Code4Lib 2015 keynote, What Beginners Teach Us – you can find part of the script here from a related talk).

I think it’s particularly telling that some months ago, friends were circulating this picture with the caption: This is what learning to program feels like.

There’s a real need within the FOSS movement to invest in more projects like the Drupal Ladder project, which seeks to specifically articulate how a person can go from being a beginner to becoming a core contributor.

Furthermore, I think there’s a real opportunity for libraries to be involved in sharing learning strategies, especially public libraries. I think the Hamilton Public Library is really on to something with their upcoming ‘Learn Something’ Festival.

Check out @HamiltonLibrary‘s “How-to Festival”, a series of workshops on how to do stuff!
— Ad/Lib (@adlib_info) April 23, 2015

Let’s not forget,

The real value of libraries is not the hardware. It has never been the hardware. Your members don’t come to the library to find books, or magazines, journals, films or musical recordings. They come to be informed, inspired, horrified, enchanted or amused. They come to hide from reality or understand its true nature. They come to find solace or excitement, companionship or solitude. They come for the software.

While the umbrella concept of User Experience has somewhat permeated librarianship, I would argue that it has not traveled deep enough and has not made the inroads into the profession that it could. I’ve been thinking about why, and I’ve come up with a couple of theories.

One theory is that many academic librarians who are involved in teaching have a strong aversion to ‘teaching the tool’. In fact, I’ve heard that the difference between ‘bibliographic instruction’ and ‘information literacy’ is that the former deals with the mechanics of searching, while ‘information literacy’ addresses higher-level concepts. While I am sympathetic to this stance (librarians are not product trainers), I also resist the ‘don’t teach the technology’ mindset. The library is a technology. We can, and we have, taught higher level concepts through our tools.

As Sierra states, “Tools matter”.

But she wisely goes on to state:

“But being a master of the tool is rarely our user’s ultimate goal. Most tools (products, services) enable and support the user’s true – and more motivating – goal.

Nobody wants to be a tripod master. We want to use tripods to make amazing videos.”

The largest challenge to the adoption of the lessons of Badass into the vernacular of librarianship is that Badass is focused squarely on practice.

“Experts are not what they know but what they do. Repeatedly.”

A statement like the above may be quickly dismissed by those in academia because the idea of practice sounds too much like the idea of tool use. (If it makes you feel better, dear colleagues, consider this restatement in the book: “Experts make superior choices (and they do it more reliably than experienced non-experts).”) Each discipline has a practice associated with it.

I have previously made the case that the librarian’s regular activity of searching for information for others at the reference desk was the practice where our expertise was once made (the technical services equivalent would be the cataloguing of materials).

But as our reference desk stats have plummeted (and our catalogue records are copied from elsewhere), I still think the profession needs to ask itself: where does our expertise come from? Many of us don’t have a good answer, which is why I think so many librarians – academic librarians in particular – are frequently and viciously attacking the current state of library school and its curriculum, demanding rigor. To that I say: take your professional anxieties out on something else. A good educational foundation is ideal, but professional expertise is built through practice.

What the new practice of librarianship is, beyond the reference desk, is still evolving. It appears that digital publishing and digitization are becoming part of this new practice. Guidance with data management and data visualization appears to be part of our profession now too. For myself, I’m currently trying to level up my skills in citation management and its integration with the research and writing process.

That’s because there has been a more fundamental shift in my thinking about academic librarianship of late, one that Kathy’s book has only encouraged. I would like to make the case that the most important library to our users isn’t the one they are sitting in, but the one on their laptop. Their collection of notes, papers, images, and research materials is the only library that really matters to them. The institutional library (which they are likely only temporarily affiliated with) may feed into this library, but its contents cannot be trusted to be there for them always.

For example, consider this: two weeks ago, I helped a faculty member with an EndNote formatting question. As I looked over her shoulder, I saw that her EndNote library on her laptop contained hundreds and hundreds of citations collected and organized over the years, and that this collection was completely integrated with her writing process. This was her library.

And despite not having worked in EndNote for years, I was able to help her with her formatting question so she could submit her paper to a journal with its particularly creative and personal citation style. It seems I have developed some expertise by working with a variety of citation managers over the years.

I wouldn’t call myself a Badass. Not yet. But I’m working on it.

And I’m working on helping others find and become their own Badass selves.

It’s been many years now, and so it bears repeating.

My professional mission as a librarian is this: help people build their own libraries. Because this is the business we’ve chosen.

The update to the setup

In my last post, I described my current computer setup. I did so to encourage mindfulness in my own practice (I am not ashamed of writing the previous sentence – I really do mean it). Forcing myself to inventory the systems that I use made two things readily apparent to me. First, it is abundantly clear that not only am I profoundly dependent on Google products such as Google Drive, but almost all of the access to my online world is tied together by my Gmail account. I aspire to, one day, be among the proud and the few who are willing to use alternatives such as ownCloud and FastMail just to establish a little more independence.

But before even considering this move, I first needed to address the second glaring problem that emerged from this self-reflection of my setup: I desperately needed a backup strategy. Massive loss was just a hard drive failure or malicious hack away.

As I write this, my old Windows XP computer is sending years’ worth of mp3s, documents, and digital photos to my new WD Book, which I bought on a recommendation from Wirecutter. When that’s done, I’m going to copy over my backups of my Google Drive contents, Gmail, Calendar, Blogger, and Photos that I generated earlier this week using Google Takeout.

I know myself well enough to know that I cannot rely on making regular manual updates to an external hard drive. So I have also invested in a family membership to CrashPlan. It took a loooong time for the documents on our family computers to be uploaded to CrashPlan’s central server, but now the service works unobtrusively in the background as new material accumulates. If you go this route of cloud-service backups, be aware that it’s likely you will exceed your ISP’s monthly data-transfer limit. Hopefully your ISP is as understanding as mine, who waived the additional costs as this was a ‘first offense’ (thank you, Cogeco!).

My next step? I’m going to re-join the Archiveteam.

Because history is our future.

The Setup

For this post, I’m going to pretend that the editors of the blog, The Setup (“a collection of nerdy interviews asking people from all walks of life what they use to get the job done”) asked me for a contribution. But in reality, I’m just following Bill Denton’s lead.

It feels a little self-indulgent to write about one’s technology purchases so before I describe my set up, let me explain why I’m sharing this information.

Some time back, in preparation for a session I was giving on Zotero for my university’s annual technology conference, I realized that before going into how to use Zotero, I had to address why. I recognized that I was asking students and faculty, who were likely already time-strapped and overburdened, to abandon long-standing practices that were already working for them in order to switch to Zotero for their research work.

Before my presentation, I asked on Twitter when and why faculty would change their research practices. Most of the answers were on the cynical side, but there were some that gave me room to maneuver, namely this one: “when I start a new project.” And there’s a certain logic to this approach. If you were starting graduate school and knew that you would have to prepare for comps and generate a thesis at the end of the process, wouldn’t you want to conscientiously design your workflow at the start to capture what you learn in such a way that it’s searchable and reusable?

My own sabbatical is over, and oddly enough, it is now, at the end of it, that I feel most like I’m starting all over again in my professional work. So I’m using that New Project feeling to fuel some self-reflection on my own research process, bring some mindfulness to my online habits, and bring deliberate design to My Setup.

There’s another reason why I’m thinking about the deliberate design of research practice. As libraries start venturing into the space of research-service consultation, I believe that librarians need to follow best practices ourselves if we hope to develop expertise in this area.

As well, I think we need to be more conscious of how and when our practices are not in line with our values. It’s simply not possible to live completely without hypocrisy in this complicated world, but that doesn’t mean we can’t strive for praxis. It’s difficult for me to take seriously accusations that hackerspaces are neoliberal when they’re being stated by a person cradling a MacBook or iPhone. That being said, I greatly rely on products from Microsoft, Amazon, and Google, so I’m in no position to cast stones.

I just want to care about the infrastructures we’re building….

And with that, here’s my setup!


There are three computers that I spend my time on: the family computer in the kitchen (a Dell desktop running Windows 7), my work computer (another Dell desktop running Windows 7), and my ThinkPad X1 Carbon laptop, which I got earlier this year. GRUB turned my laptop into a dual-boot machine that lets me switch between Ubuntu and Windows 7. I feel I need a Windows environment so my kids can play Minecraft and so I can run any ESRI products, if need be.

I have a Nexus 4 Android phone made by LG and a Kindle DX as my ebook reader. I don’t own a tablet or an mp3 player.

Worldbackup Day is March 31st. I need to get myself an external drive for backups (Todo1).


After getting my laptop, the first thing I did was investigate password managers to find which one would work best for me. I ended up choosing LastPass, and I felt the benefits immediately. Using a password manager has saved me so much pain and aggravation, and my passwords are now (almost) all unique. Next, I need to set up two-factor authentication for the services I haven’t gotten around to yet (Todo2).

With work being done on three computers, it’s not surprising that I tend to work online. My browser of choice is Mozilla Firefox, but I will flip to Chrome from time to time. I use the sync functionality on both so my bookmarks are automatically updated and the same across devices. I use Sublime Text as my text editor for code, GIMP as my graphics editor, and QGIS for my geospatial needs.

This draft, along with much of my other writing and presentations, is on Google Drive. I spend much of my time in Gmail and Google Calendar. While years ago I downloaded all my email using Mozilla Thunderbird, I have not set up a regular backup strategy for these documents (Todo3). I’ve toyed with using Dropbox to back up Drive but think I’m better off with an external drive. I have a Dropbox account because people occasionally share documents with me through it, but at the moment I only use it to back up my kids’ Minecraft games.

From 2007 to 2013, I used delicious to capture and share the things I read online. Then delicious tried to be the new Pinterest and made itself unusable (although it has since reverted back to something close to its original form), so I switched to Evernote (somewhat reluctantly, because I missed the public aspect of sharing bookmarks). I’ve grown quite dependent on Evernote as my outboard brain. I use IFTTT to post the links from my Twitter faves to delicious, which are then imported automatically into Evernote. I also use IFTTT to automatically back up my Tumblr posts to Evernote, save my Foursquare check-ins to Evernote (and Google Calendar), and save my Feedly saved posts to Evernote. Have I established a system to back up my Evernote notes on a regular basis? No, no I have not (Todo4).

The overarching idea I have come up with is that the things I write are backed up to my Google Drive account, and the library of things I have read or saved for future reading (ha!) is saved to Evernote. To this end, I use IFTTT to save my tweets to a Google Spreadsheet, and my Blogger and WordPress posts are automatically saved to Google Drive (still a work in progress; Todo 5). My web host is Dreamhost, but I am tempted to jump ship to Digital Ocean.

My goal is to have at least one backup of everything I’ve created. So I use IFTTT to save my Instagram posts to Flickr. My Flickr posts are just a small subset of all the photos that are automatically captured and saved on Google Photos. No, I have not backed up these photos (Todo 6), but since 2005 I have printed the best of my photos on an annual basis into beautiful softcover books, first using QOOP and later through Blurb. My Facebook photos and status updates from 2006 to 2013 have been printed in a lovely hardcover book using MySocialBook. One day I would like to print a book of the best of my blogged writings using Blurb, if just as a personal artifact.

Speaking of books: because I’m one of the proud and the few to own a Kindle DX, I use it to read PDFs and most of my non-fiction. When I stumble upon a longread on the web, I use Readability’s Send to Kindle function so I can read it later without eyestrain. I’m inclined to buy the books I use in my writing and research as Kindle ebooks because I can easily attach highlighted passages from these books to my Zotero account. My ebooks are backed up in my calibre library. I also use Goodreads to keep track of my reading because I love knowing what my friends are into.

I subscribe to Rdio and, for those times that I actually spend money on owning music, I try to use Bandcamp. I'm an avid listener of podcasts and for this purpose use BeyondPod. Our Sonos system allows us to play music from all these services, as well as TuneIn, in the living room. The music that I used to listen to on CD is now sitting on an unused computer running Windows XP, and I know that if I don't get my act together and transfer those files to an external drive soon, those files will be gone for good… if they haven't already become inaccessible (*gulp*) (Todo 8).

For my “Todo list” I use Google Keep, which also captures my stray thoughts when I’m away from paper or my computer. Google Keep has an awesome feature that will trigger reminders based on your location.

So that’s My Setup. Let me know if you have any suggestions or can see some weaknesses in my workflow. Also, I’d love to learn from your Setup.

And please please please call me out if I don’t have a sequel to this post called The Backup by the time of next year’s World Backup Day.

Teach for America. Code for America. Librarianing for America

On Friday the 13th, I gave the morning keynote at the Online Northwest Conference in Corvallis, OR. Thanks so much to the organization for inviting me.

Last October, I was driving home from a hackathon when I heard something extraordinary on the radio. Now, as human beings, we tend to get over-excited by coincidence – it’s a particular cognitive bias called the frequency illusion – you buy a gold station wagon and suddenly you see gold station wagons everywhere (yes, that’s my gold station wagon behind me). But that being said, I  still contend that there was something special about what I heard and when I heard it. Because you don’t hear people talking about Open Data on the radio very often.

So here’s the brief backstory. The local technology incubator, in partnership with the local hackerspace that I’m involved with, was co-hosting a week-long hackathon to celebrate Science and Technology Week. I was just returning from its kick-off event, where I had given a presentation on the role of licensing in Open Data. This particular hackathon was a judged event, with the three top prizes being a smart watch, admission to an app commercialization seminar, and an exclusive dinner with an expert in the commercialization of apps — which was kind of odd, since the data sets provided for the event were sets like pollution monitoring data from the Detroit River, but hey – that’s part of the challenge of making commercial apps out of open data.

While it has been said that we are now living in the age of Big Data, only the smallest subset of that data is explicitly licensed in such a way that we, the citizens, can have access and can make use of it without having to ask permission or buy a license. I’m the lead of Open Data Windsor Essex, and much of my role involves explaining what Open Data is because it’s not widely understood. Because I’m talking to my fellow librarians, I’m going to give you a very abbreviated version of my standard Open Data explainer:

One of the most common definitions of Open Data comes from the Open Knowledge Foundation: Open data is data that can be freely used, reused and redistributed by anyone – subject only, at most, to the requirement to attribute and sharealike.

So, using this definition, a Creative Commons CC-BY license – which designates a work as free to use without requiring permission, as long as attribution is given – is compatible with Open Data. But CC-NC, which stands for Creative Commons Non-Commercial, is not considered Open Data, because the domain of use has been restricted.
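The distinction above is mechanical enough to capture in a few lines of code. This is only a toy illustration of the open-definition test just described – attribution (BY) and share-alike (SA) conditions are permitted, but restricting the domain of use (NC) or reuse (ND) means the data is not open – and the license labels are simplified for the example (special cases like CC0 are ignored).

```python
# Toy classifier for CC-style license codes against the open definition
# described above. Deliberately simplified: real license auditing is
# more involved than string parsing.
OPEN_CONDITIONS = {"BY", "SA"}    # restrictions the open definition permits


def is_open_data(license_code: str) -> bool:
    """Return True if a CC-style code (e.g. 'CC-BY-SA') meets the definition."""
    parts = license_code.upper().split("-")
    if parts[0] != "CC":
        return False              # this sketch only handles CC-style codes
    conditions = set(parts[1:])
    return conditions <= OPEN_CONDITIONS  # any other condition closes the data
```

So `is_open_data("CC-BY")` and `is_open_data("CC-BY-SA")` hold, while `is_open_data("CC-BY-NC")` does not – exactly the distinction drawn in the paragraph above.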

We in librarianship talk a lot about open source, and open access, but even we don’t talk about open data very much. So that’s why I was so surprised when there was a conversation coming from my car radio on the importance of Open Data.  Granted, I was listening to campus Radio – but still, I think I reserve the right to be impressed by how the stars seemed to have aligned just for me.

The show I was listening to was Paul Chislett’s The Shake Up on CJAM, and he was interviewing Paula Z. Segal, the lead executive of a Brooklyn-based organization called 596 Acres. Her organization builds online tools that make use of Open Data to allow neighbours to find the vacant public land hidden in plain sight in the city, as the first step in the process of turning those lots into shared resources, such as community gardens. Perhaps not surprising to you now, but in 2011 there were 596 acres of such empty lots in Brooklyn alone.

Segal was telling the radio host and the listening audience that many communities make data – such as data that describes what land is designated for what purpose – open and accessible to their residents. However, most citizens don’t know that the data exists because it is contained in obscure portals, and even if they did find the data, they generally do not understand how to handle it, how to make sense of it, and how to make it meaningful to their experiences.

Now when I heard that, and whenever I hear similar complaints that the promise of Open Data has failed because it tends to add power to the already powerful, I keep thinking the same thing – this is a job for librarians.

It reminds me of this quote from open government advocate, David Eaves:

We didn’t build libraries for a literate citizenry. We built libraries to help citizens become literate. Today we build open data portals not because we have public policy literate citizens, we build them so that citizens may become literate in public policy.

This brings us to the theme of this morning’s talk – which is not Open Data, although I will express today’s theme through it, largely because I’m back from a year’s sabbatical immersed in the topic and it’s still difficult for me not to talk about it. No, today I would like to make a case for creating a nationwide program to put more librarians into more communities and into more community organizations. I have to warn you that I’m not going to give you any particulars about what shape or scope such a program could take; I’m just going to try to make a case for such an endeavor. I haven’t even thought of a good name for it. The best I can come up with is Librarianing for America. On that note, I would like to give a shout-out to Chris Bourg for – if not coining the word librarianing – at least bringing it to my attention.

And I very much hope that perchance the stars will align again and this theme will complement the work that I am very much looking forward to hearing today at Online Northwest : about digitally inclusive communities, about designing and publishing, about being embedded, about sensemaking through visualization, about enhancing access and being committed to outreach.

Before I continue I feel I should disclose that I’m not actually American.  I grew up across the river from Port Huron, Michigan and I now live across the river from Detroit, Michigan.  I literally can see Detroit from my house.

And Detroit is the setting for my next story.

A quick aside first – my research interest in open data has been largely focused on geospatial data as well as the new software options and platforms that are making web mapping much more accessible and viable for individuals and community groups  when compared to the complex geographic information systems commonly known as GIS –  that institutions such as city governments and academic libraries tend to exclusively support.

I mention this as a means to explain why I decided to crash the inaugural meeting of Maptime Detroit that happened in early November last year.

Maptime is time designated to making maps. It is the result of kind volunteers who find a space, designate a time, and extend an open invitation to anyone who is interested to drop in and learn about making maps. It started in San Francisco a couple of years ago and now there are over 40 Maptime Chapters around the world.

Now, when I went to the first Maptime Detroit event, there wasn’t actually any time given to make maps. For this inaugural meeting, instead there was a set of speakers who were already using mapping in their work.

Not very many people know that Detroit has an amazing history of citizen mapping initiatives  – the map behind me is from The Detroit Geographical Expedition from their work Field Notes Three from 1970.  I think you could make a case that another kind of community mapping outreach work is starting to emerge again through the many community initiatives that are supported by mapping that is happening in Detroit today. 

Many of the organizations who are doing community mapping work were presenting at Maptime Detroit including Justin Wedes, an organizer from the Detroit Water Brigade.

As you might already know, the city of Detroit declared bankruptcy in 2013 with debts somewhere between $18 and $20 billion. The city is collapsing upon itself at a scale that’s very difficult to wrap one’s mind around.

The Detroit Water and Sewage Department is currently conducting mass water shut offs in the city which will affect over 120,000 account holders over an 18 month period at a rate of 3,000 per week. This will account for over 40% of customers who are using the Detroit Water system. As 70,000 of those accounts are residential accounts, it is thought that 200,000-300,000 people could be directly affected.

The Detroit Water Brigade coordinates volunteer efforts in the distribution of bottled water to affected neighbours, and also acts as an advocate for the UN-recognized human right to water on behalf of Detroiters.

But at Maptime Detroit, Justin Wedes didn’t begin his talk with his work in Detroit. Instead he began his presentation by speaking about his experiences with Occupy Sandy. In October of 2012, while New York’s FEMA offices were closed due to bad weather, veterans from the Occupy Wall Street community came forward and used their organizing skills to mobilize ground support for those who needed it most. At first, Occupy Sandy used free online services such as Google Spreadsheets and an Amazon wedding registry to collect and redistribute donations, but by the end of their work, they had started using the exact same software that the city of New York uses for dispatching resources during disasters.

Wedes described the work of the Detroit Water Brigade, and as he did so, he also told us how very different his experiences were in Detroit compared to his ones in New York after Superstorm Sandy. After Sandy hit, he told us, those New Yorkers who could help their more badly damaged neighbours did so with great enthusiasm, and that help was well received. With the water shutoffs in Detroit, however, Justin feels there is an underlying sense of shame in accepting help, and the response from the community at large is more restrained. When he said this, the first thing that came to my mind was an article I had read years ago by Rebecca Solnit in Harper’s Magazine. In that article, which was later expanded into a book called A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster, Solnit makes the observation that humanity opens itself to great compassion and community when a disaster is brought on by weather, but that this capacity is strikingly diminished when the disaster is man-made.

There are many reasons why this water shut-off situation in Detroit came about, and I’m not going to go into them, largely because I don’t fully understand how things got to be so dire. I just want to draw attention to the tragic dynamic at hand: as the problems of Detroit grow – due to fewer people being able to pay for an increasingly costly and crumbling infrastructure – the city government’s capacity to deal with the worsening situation is, in turn, also reduced.

What I believe should be of particular interest to us, as librarians, is that there has been a collective response from the philanthropic, non-profit community organizations along with businesses and start-ups to help Detroit through the collection and sharing of city data for the benefit of the city as a whole. Data Driven Detroit does collect and host open data, but it also hosts datasets that are collected from public and private sources as a means to create “clear communication channels back and forth between the public, the government, and city service providers.”

One of the more striking datasets that’s both explorable through a map and available for download as open data is Detroit Property Information through the Motor City Mapping project. In the fall of 2013, a team of 150 surveyed the entire city, taking photos and capturing condition information for every property in the city of Detroit. According to their information at this moment, of Detroit’s 374,706 properties surveyed, 25,317 are publicly owned structures. Of those, 18,410 are unoccupied, 13,570 require boarding up, and the condition of 2,511 of these buildings is so poor that demolition is suggested.
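To give a hypothetical taste of what working with a downloaded copy of survey data like this might look like, here is a short sketch that tallies property conditions from rows of CSV data. The column names and values here are invented for illustration; the real dataset's fields may well differ.

```python
# Hypothetical example: tallying property conditions from open survey data.
# The sample rows and field names are invented, not the real dataset schema.
import csv
from collections import Counter
from io import StringIO

SAMPLE = """parcel_id,condition,occupied
001,good,yes
002,suggest demolition,no
003,fair,no
004,suggest demolition,no
"""


def tally_conditions(csv_text: str) -> Counter:
    """Count how many surveyed properties fall into each condition."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row["condition"] for row in reader)
```

Swap `StringIO(csv_text)` for an `open()` call on the downloaded file and the same three lines of counting logic would scale to all 374,706 parcels – which is rather the point about what open data plus a little code can do.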
Now, I can only speak for myself, but when I see these kinds of projects it makes me want to learn the computer-based wizardry that would allow me to do similar things. Because while I do enjoy the intellectual work that’s involved with computer technology, what really inspires me is this idea that through learning to program, I can gain superpowers that take massive amounts of data and do some good with them at the scale of a city.

In short, I want to have the powers of Tiffani Ashley Bell. Tiffani heard about the plight of water-deprived Detroiters last July and, after being urged on by a friend, she sat down and came up with the core of The Detroit Water Project in about four hours. The Detroit Water Project pairs donors with someone in Detroit who has an outstanding water bill, and makes it possible for these donors to contribute directly to that bill. Since the project started in July, over 8,000 donors have paid $300,000 directly towards water bills.

Now, while I think this project is incredibly valuable and very touching, as it allows donors to directly improve the situation of one household in Detroit, the project admittedly does not change the dynamics that gave rise to the grievous situation at hand.

So what is to be done? How can we combine the power of open data, computer code, and the intention to do good to make more systemic changes? How can we support and help the residents and the City of Detroit in the good work that they already do?

This is where I think another organization comes in: Code for America.

Code for America believes it can help government be more responsive to its residents by embedding those who can read and write code into the city government itself.  It formed in 2009 and it works by enlisting technology and design professionals to work with city governments in the United States in year long fellowships in order to build open-source applications that promote openness, participation, and efficiency in government.

In other words, it’s a combination of service and app building that is paid for by the city, usually with the help of corporate sponsors. Each year Code for America selects 8-10 local government partners from across the US and 24-30 fellows for the program through a competitive application process.

In 2012, the Knight Foundation and Kellogg Foundation funded three Code for America fellows for a residency in Detroit.  These Code for America fellows worked with the Detroit Department of Transportation to release a real-time transit API and build the TextMyBus bus notification system which launched in September of that year.

In addition to TextMyBus, the fellows also built an app called Localdata to standardize location-based data collected by data analysts and community groups. “Localdata offers a mobile collection tool with a map interface as well as a paper collection option that can be scanned and uploaded for data syncing.” This particular project joined the Code for America Incubator and has since expanded into a civic tech startup company.

In my mind, Code for America can be thought of as a scaled-up version of a civic hackathon. If you aren’t familiar with hackathons, they are generally a weekend affair in which participants work solo or in groups to code a website or app that ostensibly solves a problem. Sometimes there are prizes, and sometimes the event is designed as a means to generate the first concept of a potential start-up. Hackathons can be a good thing – you might remember from the beginning of my talk that I sometimes help out with them, which means that I endorse them – but I do admit that they have their limits (many of which are described in this blog post behind me). For one, it’s simply not reasonable to expect that a weekend of hacking is going to result in a wonderful app that will meet the needs of users that the programmers have likely not even met. But with good event design that strives to incorporate mentorship, workshops, and opportunities to meet with potential users of said apps, hackathons can be a great start towards future collaborations.

Code for America also incorporates mentorship and training into its process. Those selected for a fellowship begin at an institute in San Francisco, where fellows receive training in how local government and agencies work, how to negotiate and communicate, and how to plan and focus their future code work. That being said, Code for America has its own limitations as well. This particular article gently suggests that Code for America may – in some instances – seem to benefit the fellows involved more than the cities themselves. For one, it costs a city a lot of money – $440,000 – just to support a set of Code for America fellows for a year, and then, after they leave, the city needs to have the capacity to support the care and feeding of the open source apps that have been left behind.

Which makes me think.

If only… if only….

If only there were people who could also help cities help their communities, who didn’t have to be flown in and didn’t disappear after a year. If only there were some group of people who could partner with cities and their residents; who already had some experience and expertise in open data licensing; who understood the importance of standardizing descriptors in datasets; who were driven to improve the user experience; and who understood that data use requires data literacy, which demands both teaching and community outreach.

Friends, this is work that we – librarians – can be doing. And our communities need us. Our cities need us.

Furthermore, I don’t know whether you’ve noticed, but every year another amazing class of passionate and talented freshly minted librarians emerges, and we are simply not building enough libraries to put them in.

So I think it’s time to work towards making our own Librarianing for America.

I don’t think it’s possible to model ourselves directly on Code for America. It’s not likely we are going to find cities willing to pay $440,000 for the privilege to host 3 librarians for a year.  At least, not initially. Let’s call that our stretch goal.

We can start out small. Perhaps librarians could participate in one of the 137 Code for America Brigades that bring together civic-minded volunteers to work together via meetups.  There are a variety of other organizations that also draw on civic minded volunteers to work together towards other goals including the Open Knowledge Foundation, hack4good, CrisisMappers, and the Humanitarian OpenStreetMap Team.

Or perhaps we can follow the lead of libraries such as the Edmonton Public Library, York University Libraries, The Chattanooga Public Library, and the University of Ottawa, who have all hosted hackathons for their communities.

This is a slide that’s admittedly out of context. I took it from a Code for America presentation, and I’m not sure how precise this statistic of 75% is to their project, or whether it can be widely applied to all projects. But I do think it is safe to say that programming code is only as good as its data is clean and meaningful.

And I say this because I don’t believe that librarians have to know how to program in order to participate in Librarianing for America. I believe our existing skillset lends itself to the cause. Our values and our talents are greatly underappreciated by many, many people, including librarians themselves.

But it appears that the talent of librarians is starting to be recognized. The City of Boston was recently awarded a Knight Foundation grant for the specific purpose of hiring a librarian as part of a larger team to turn the City of Boston’s many open datasets into something findable, usable, and meaningful for its residents.
And perhaps we can learn from and expand on the work of ILEAD USA.

ILEAD stands for Innovative Librarians Explore, Apply and Discover. It is a continuing education program supported by grant funding from the Institute of Museum and Library Services, and it involves librarians from ten states.

ILEAD USA gathers librarians together with the goal of developing team projects over a nine-month period through a combination of intermittent face-to-face meetings and online technology training sessions. At the end of nine months, each team presents its project to the entire ILEAD USA audience, with the goal of either sustaining these projects as ongoing library programs or directly applying the knowledge gained from ILEAD USA to future collaborative projects.

Now, when I first proposed this talk, I was unaware of the work of the ILEAD program. Since then I’ve had the pleasure of speaking with its project director on the phone. I asked her if she was familiar with Code for America, and she told me no, although she did know about Teach for America.

I don’t know about you, but ILEAD sounds a little bit like Librarianing for America to me. Or at least like one possible form that it could take.

Or it could be that Librarianing for America could be a placement service that matched and embedded librarians with non-profits. The non-profits could gain from the technical and material experience of the librarian, and the librarian would be able to learn more about the needs of the community and form the partnerships that can only occur when we step outside of our buildings.

I don’t think it’s so far-fetched. Last year, my local hackerspace received three years of provincial funding to hire a staff coordinator to run a number of Open Data hackathons, host community roundtables, and pay small stipends to community members who help in our efforts to make open data from the non-profit community more readily available to the communities they serve.

Now, it just might be the Frequency Illusion, but I prefer to think of it as if the stars are aligning for libraries and their communities. At least they appear so when I look up towards our shared horizon.

Thank you all for your kind attention this morning, and I very much look forward to spending this day librarianing with everyone here at Online Northwest.

Be future compatible

Hmmm, I thought my last post was kindly published but the RSS feed did not update, so I made this re-post: On February 1st, I gave a presentation at the American Library Association Midwinter Conference in Chicago, Illinois as part of the ALA Mast…