Official blog of the Ontario Library and Information Technology Association

A statement on inclusiveness in STEM programming

The following statement by OLITA Council was also sent directly to Carole-Ann Churcher, CEO of the Timmins Public Library.

The Ontario Library and Information Technology Association (OLITA) recognizes Timmins Public Library as a thriving library that aims to provide diverse programming to community members. From francophone classes to various craft and entertainment shows, it is clear to OLITA Council that Timmins Public Library fosters a love for reading and learning amongst all ages.

With this in mind, OLITA Council was saddened to see an online petition, circulated on June 30, 2015, reporting that a female patron was placed on a waiting list for a technology program due to her gender. The Assistant Library Director explained to the patron, “boys’ academic and literacy skills don’t improve over the summer break, therefore this program would only be offered to boys.”

The OLITA Council believes the initial policy, as explained in the petition, does a great disservice to children of all genders. While the library program was developed with good intentions, it both discriminates against girls and stereotypes boys. Neither group benefits from the reinforcement of outdated and discouraging gender norms.

The cultural messages girls receive about being incompatible with technology are even stronger and in need of addressing. Children who identify as girls or as non-binary are more broadly discouraged and excluded from science, technology, engineering, and math (STEM) fields. Library programs with a technology focus designed only for boys reinforce that message. When creating these types of library programs, it is imperative to make them as inclusive as possible. An excellent example of inclusive gender-based advertising for programming might say “girls and girl-positive allies welcome.” (Even more radical, consider: ‘Girls, grrls, bois, boys, genderqueer and trans children all welcome.’)

Libraries, like our counterparts in education and business, must work to address broad gender discrepancies in STEM fields. The public library is also an ideal place to address the inequity of opportunities that develop from divisions (class, race, socioeconomic) in our society. In addition to making existing programs inclusive, libraries can do even more by offering programming that caters to historically marginalized groups.

Libraries must keep the following principle in mind when creating programming: “If you’re trying to target an underserved group… first look at your staff and make sure there is representation. If no members of that demographic among the staff are actively leading sessions, provide the necessary training to change that.” (From OITP Hacks the Culture of Learning in the Library, ALA Annual 2015, Lisa Peet, Library Journal Online). It is vital for public libraries to work toward developing and maintaining inclusive spaces – sometimes this means making a point of addressing past wrongs.

We ask the Timmins Public Library to review their practices regarding community consultation on event programming and to update their library programming policies to prevent similar occurrences in the future. The Ontario Public Library Guidelines, section 4.8, outlines best practices for developing library programming, and additional information on determining community needs can be found in Appendix C. This may be an opportune time for Timmins Public Library to engage their community using any of the techniques outlined therein.

This instance has been a rallying event highlighting possible pitfalls in creating Library STEM programs for the community. It would benefit all libraries to take this opportunity to review their programming policies to ensure that their programs are inclusive and meet the needs of their community.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

A Conversation with Mita Williams on Open Data

In preparation for Digital Odyssey coming up this week, I had a quick conversation with one of our presenters, Mita Williams. It’s worth a read, or a listen, as the case may be.

The transcript is edited for clarity. The audio can be found below.


Beth: So, the first thing I wanted to ask you was: why open data? Like, what got you interested in it in the first place?

Mita: Oooh, I don’t actually have a good origin story. It was always just kind of in the water because I was interested in open source communities and I was also interested in civic engagement and in making governments better. Open data is where those two worlds collided. The more time I spent looking at open data, the more I kept thinking that this is the sort of work that is in our wheelhouse. This is what librarians do. Or should do or could do.

Beth: Yeah, there’s a lot of that in the Twittersphere lately just about how the librarian’s specialty is dealing with a lot of information and so these open data sets are something we would, sort of, naturally be a good fit for.

Mita: Yeah, how I talk about it depends on who my audience is. For example, if I was talking to scientists, I would say “open data makes for good science because it allows people to replicate results and improve results”. And if I was talking to others, I sometimes say that, because of the shift to digital, the scale of our work has grown to include data.

Or you can get into all sorts of philosophical questions. When libraries had only printed books, their use wasn’t so prescribed. But now that books are digital, they come with licensing, and the licensing defines what we can do and, ultimately, what we are. We’re so constrained by e-book licensing.

Beth: Yeah, for sure. We’re grappling with some of those issues in public libraries as well…

I’m going to stop myself from getting too off track though, and ask for some examples of where you’ve seen open data used well, or where its use has affected government, policy, or leaders in our communities?

Mita: One of the things I like about open data is that there are people using it in ways that you wouldn’t expect them to. Or they respond to it in different ways than you expect. So for example, in Chicago, one of the things they did was put GPS trackers on their snow plows.

But before I get into it, I should state as background that there are some people who are really interested in open government data because they want transparency and accountability. The idea is that it’s harder to hide things like corruption if you make things open.

So back to the snowplows. The benefit of having the snow plows’ GPS points open is that it allowed the creation of a real-time map showing where each of the plows was located. Before this map, some people assumed the worst: when a snow plow was late to their neighborhood, they assumed it was because the plows were prioritizing more affluent neighborhoods. But with the map, they could see that this wasn’t the case, so the map gave reassurance that a service was working well. Interestingly, it also gave a greater understanding of how the whole system of the city worked.

Beth: That’s really cool. I’d like that in Toronto.

Mita: Yes! In the case of transit, I would like real-time bus data because I think open data can relieve so much of the anxiety of “Did I just miss the bus? Or is it a minute away?”

Beth: Yeah. It’s giving us that perspective. Now I’m going to get a little more technical. I’ve heard lately of one or two cases where data sets were being provided in formats like Excel or PDF. I know this isn’t very useful, but I’m basically just guessing why. So, maybe you can give a more educated explanation and tell us what formats this data should be in?

Mita: So, there are two sides to open data: licensing and data formats. One of the projects I’m working with is Open Data Windsor Essex. We’re talking to nonprofit groups, as many of them do their own research to figure out how they can be more effective on the ground. In doing so they produce these wonderful reports with lots of great, unique tabular data in charts. But what happens when someone else in the community wants to reuse the data in those tables? If it’s a PDF composed of scanned pages, the pages are basically photographs of documents. And sometimes, even if it’s a ‘born-digital’ PDF, you often can’t just cut and paste columns of numbers because the process loses all that tabular formatting.

So if an organization also provides an Excel spreadsheet of the contained tables, that goes a long way toward making their work more reusable in the community. And then, at another level of complexity, if an organization can present their data in a format where a computer can read it on a regular basis, all of a sudden they can provide information as a service. For example, the local health unit might put on their website whether the local beach is closed or not. But if they make this same information available through an API, others could feed it more widely into other websites, apps, or even Twitter bots.
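[Editor’s note: as a rough sketch of that idea, a machine-readable beach-status feed might be consumed as below. The field names, beach names, and statuses are invented for illustration, and the JSON payload is inlined rather than fetched from a real endpoint.]

```python
import json

# Hypothetical JSON payload a health unit's beach-status API might return.
# Field names and beach names are invented for illustration.
payload = """
{
  "beaches": [
    {"name": "Sandpoint Beach", "status": "open"},
    {"name": "Seacliff Beach", "status": "closed"}
  ]
}
"""

def closed_beaches(raw):
    """Return the names of beaches currently marked closed."""
    data = json.loads(raw)
    return [b["name"] for b in data["beaches"] if b["status"] == "closed"]

# A website, app, or Twitter bot could reuse the same feed.
print(closed_beaches(payload))  # ['Seacliff Beach']
```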

Not to oversell it, but the promise of open data is that if you make your data available both in license and in format, then other people will surprise you with the uses they will find for it.

Beth: Yeah. But that’s the beauty of it kind of.

Mita: Exactly! The hackerspace I’m involved in had an open data hackathon with students. One group took the city data and turned it into a game: they overlaid hospital and garbage sites onto a map and made it the basis of a zombie game.

Beth: Neat! In your blog you mentioned that you are specifically interested in new software options and platforms that are making web mapping a lot more accessible to individuals and community groups and I work with public libraries so that’s particularly relevant to me. Um, could you tell me a little bit more about those programs?

Mita: Yes! Not that long ago, if you wanted to create a map of your city of some sort, you most likely needed access to what’s known as Geographic Information Systems software, or GIS software. That software is specialized: powerful and expensive. GIS software can do really amazing, complicated things related to spatial analysis. For example, you can give it spatial information about rainfall, elevation, and soil types, and then, if you put in formulas that relate them to each other, the GIS can compute and show where there could be a flood zone.

But for a long time, if you wanted to make a simple computer map, there was nothing you could use. What really changed things was Google Maps, along with the spread of mobile phones, which gave us the expectation of finding things near us. And now there are new mapping platforms to choose from.

Beth: Can you just name one or two of those software platforms?

Mita: Yes I can. So there are companies that provide a platform for mapping that aren’t Google or Bing. One is CartoDB (and one of our keynote speakers at Digital Odyssey will be coming from CartoDB) and another company is called MapBox.

Also, you can now hand-code your maps using simple JavaScript libraries such as Leaflet. Leaflet allows people to add points, lines, polygons and text to map tiles that you can get from a variety of sources, including OpenStreetMap, which is the Wikipedia of maps.
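[Editor’s note: to give a feel for how little code a Leaflet map needs, here is a minimal page sketch. The page itself is HTML and JavaScript; it is wrapped in a Python string only to keep the example self-contained, and the coordinates (downtown Toronto) and popup text are placeholders.]

```python
# A minimal Leaflet page: OpenStreetMap tiles plus one marker.
# The coordinates and popup text are placeholders.
leaflet_page = """
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css"/>
  <script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
  <style>#map { height: 400px; }</style>
</head>
<body>
  <div id="map"></div>
  <script>
    var map = L.map('map').setView([43.65, -79.38], 13);  // centre and zoom level
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '&copy; OpenStreetMap contributors'
    }).addTo(map);
    L.marker([43.65, -79.38]).addTo(map).bindPopup('Hello, Toronto!');
  </script>
</body>
</html>
"""

with open("map.html", "w") as f:
    f.write(leaflet_page)  # open map.html in a browser to see the map
```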

Beth: Are there any major challenges you foresee in open data at the moment?

Mita: The main challenge is still the learning curve. Anything that still requires JavaScript is going to stop people from participating.

Beth: Alright, so my last question was: why should we be excited about your session at Digital Odyssey? Other than that it’s just super practical and hands-on and I think it sort of tackles that learning curve issue a little bit? Like it gives people their first taste of using this software without being overwhelmed? [laughter] Maybe I’ve sold it for you…

Mita: No, that’s exactly it! What I bring to this workshop is that I still remember starting from zero and trying to piece everything together into a larger context. So I’ve designed the workshop so people should get a feel of what is possible and they will definitely be able to make a beautiful map by the end of it.

Beth: Thank you very much!




If you haven’t registered for Digital Odyssey 2015 yet, there is still time! Come listen to Mita and others talk about open data and open heritage.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

The Art of Oral History Collecting

By Susanna Galbraith, OLITA Council Member

At Digital Odyssey 2015: Open Data, Open History, participants have the exciting opportunity to learn about the art of collecting oral histories from experts from the Multicultural History Society of Ontario (MHSO). OLITA council member Susanna Galbraith talked to Cathy Leekam, MHSO Program Manager, about what to expect at this interactive workshop.


Q: Tell us a bit about the Multicultural History Society of Ontario.

A: We are a not-for-profit educational institution and archives in Toronto established in 1976 by Robert F. Harney, a professor at the University of Toronto. He and his colleagues wanted to create a resource that would help to increase public understanding of the multicultural nature of Ontario’s history. We promote the idea of sharing history and cultural experiences; we feel that every cultural tradition brought to Ontario becomes part of our common heritage.


Q: What are some of the unique challenges to collecting and preserving oral histories, when compared with other historical resources?

A: Digitization: The majority of our interviews were recorded in the 1970s and 1980s using older technology, and they are reaching the end of their natural shelf life. We are digitizing these recordings in order to preserve them in a more stable format and to make them accessible to a larger audience. This is a costly process that may not be possible for smaller organizations without some kind of support.

Collecting interviews: Selecting and recruiting interview narrators can be a challenge – both reaching and convincing narrators to share their story, and recording their testimony in a format and language that is authentic and accessible to other people.

 Interview training: Finding and training interviewers so that they can prepare and produce strong interviews and create a rapport with the narrators in order to faithfully capture their story.


Q: How do you envision the role of libraries and archives in collecting oral histories?

A: Libraries and archives can recruit participants and promote oral history projects through existing relationships with local community members. They can also provide essential resources that will allow members of the public, students or researchers, to preserve voices and memories that would otherwise be lost to history. They could accomplish this through: training; a venue and equipment for interviews; or a forum to preserve, host and share interviews.


Q: An oral history workshop sounds really fascinating! Can you tell us a bit about what to expect?

A: We will introduce workshop participants to the art of oral history collecting. We will give practical tips on how to undertake an oral history project, using sample clips from the MHSO’s collection that illustrate what to do, and what NOT to do! At the end of the workshop participants will have a chance to practice their interviewing skills.


Digital Odyssey 2015: Open Data, Open Heritage will take place on Friday, June 12, 2015 at the George Brown Waterfront Campus in Toronto. There’s still time to register for Digital Odyssey and access the full program.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

Learning more about Open Data with Keith McDonald, Toronto’s Open Data Lead

By Jeff Toste, OLITA Council Member

In the lead-up to Digital Odyssey 2015: Open Data, Open Heritage on June 12, OLITA council member Jeff Toste spoke with one of our panel speakers, Keith McDonald, Open Data lead for the City of Toronto, to learn more about the history of the department and the current uses of open data and the roles librarians have in the open data realm.


Q: Can you give us a bit of history about the City of Toronto and Open Data? Why did it all begin?

A: Our site’s birthday was November 2, 2009, but our history predates this: Mark Surman from the Mozilla Foundation spoke at a City of Toronto Web 2.0 summit in 2008, and Toronto’s former mayor, David Miller, was in attendance. Surman emphasized that government had to get in the “sandbox”, “build a city that thinks like the Web”, and open up data. Miller got behind the idea and shortly thereafter announced that the city would begin to release its data. To put this into action, we partnered with the City Clerk’s Office to begin releasing data sets. The Clerk’s Office’s expertise ensured that privacy laws were being followed, and our I&T area got the website done. We released around 20 datasets in November 2009 and have grown from there. By 2011 we had created our open data policy and refined the open data license for users. In terms of a “maturity model”, we are still in our public school years – still in “short pants” if you will – and have a ways to go before we graduate. But we are indeed on our way.


Q: I’m pretty new to the idea of open data. How would you explain its use and importance to a newbie like me?

A: The first use of open data that we saw was the TTC’s real-time streetcar and bus route status. Several apps were developed using this data almost as soon as it was released. Before these apps became available, you would have to stand and read the bus stop signs, which, of course, couldn’t reflect any traffic or weather delays. It took six to seven months to get the data from the TTC, but the apps were developed within days.

From the entrepreneurial side, a local developer, Devin Tu, created a web-based app called Map Your Property. He started his work here in the City of Toronto and hired a team of developers and coders to work with Toronto’s open data sets. He developed an app that lets people find information to assist with buying, building, and developing property.

Finally, there’s a New York City web-based app called RentCheck that was developed by a Canadian, and the developer is working to bring the app to Toronto. The app maps rental units, costs, and complaints. An app like this gives renters much more power; before this kind of information was available in app and mobile form, landlords had the advantage. In New York City, it has actually effected change, as landlords have been forced to respond to the more level playing field. In Toronto, the app would use much of the data available from our website.


Q: What public feedback have you received from the work you do?

A: The feedback has been generally good. We’ve heard that there is not enough data, or that it has not been released fast enough. We like to know how we stack up, and we show a willingness for dialogue; when we communicate and discuss openly how we can do better, that earns us marks on the customer service side. Generally, though, the comments have been about the completeness of the data, and we’re a little behind when it comes to quantity. For example, Edmonton has close to 700 datasets while Toronto has just under 200.


Q: Why do we have fewer datasets?

A: There are many reasons – from not having enough staff resources to convert the data into machine-readable formats, to offering files in multiple formats and tools for visualizations, to needing more opportunities to sell our message, to even having the data collected in a digital format in the first place! Paper is not dead yet!


Q: What do you see as the role of the librarian in the open data movement keeping in mind that librarians work in various settings like public, academic and special libraries?

A: The more information, access, and education that is available, the better. In a city like Toronto it’d be great to have one website for police, fire departments, the TTC, etc. Right now, the City of Toronto website does carry our own City-related data sets and some TTC data, but it isn’t cast in stone that we are the clearing house for all City-type services such as school boards and libraries. It would also be great to get us all in one room to discuss data and standards. I’d like to see a massive commitment and co-operation to push it out. Right now, it’s staggered; people release data at different times, in different places. Partnerships here are key.

Local librarians can also teach us how to present this information better. For example, we have a dataset on defibrillators in City-owned buildings, but it isn’t filed under “D” in our data catalogue index; it’s under “A” for “Automatic External Defibrillator”. We’re not experts in cataloguing, and librarians could straighten this out quite well, I think. They could also look at our metadata so our search engine is more precise. We often use terms and language that make sense to the staff person but not to the general public.


Q: Related to the former question, what role do librarians have in teaching “data literacy” to the various groups that we serve?

A: It’s a new subject matter; the term has only been in existence for about 10 years. There is also confusion over the word “open”, as we have “open” government, “open” information, and “open” data.

There are many types of literacy needed as new technologies and communications develop: think of the need for media literacy or social media literacy. Data literacy is no different in that people will need, and do need, to understand what data is and what it can offer. This doesn’t mean everyone needs to know how to make a mobile application, but it does mean people should understand that data is being collected and used by virtually every enterprise – from governments to commercial stores and universities to Facebook and beyond!

In the early days of releasing our data, developers were the first users, but within the last three years new groups have been asking how to use open data. For example, these people may not know that you can use Excel to create diagrams, and there are many other programs to render graphs and charts as well. Courses can be offered through the libraries, from the simple, like making a chart, to full-on app development. Even data journalism is a viable career option now.


Digital Odyssey 2015: Open Data, Open Heritage will take place on Friday, June 12, 2015 at the George Brown Waterfront Campus in Toronto. Register for Digital Odyssey to access the full program.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

OLA Superconference 2016: Call For Proposals

Ontario Library and Information Technology Association (OLITA) stream

The theme of the 2016 Ontario Library Association (OLA) Super Conference is “Library Lab: The Idea Incubator”. The OLA’s Ontario Library and Information Technology Association (OLITA) stream invites session proposals from all types of libraries and all kinds of staff.

“This year’s theme explores the unique position of libraries to harness creative energy and incubate ideas. Our communities are brimming with fresh ideas, entrepreneurial start-ups, and experimental partnerships. Libraries serve as environments that catalyze exploration, breed experimentation and encourage curiosity.”

OLA Super Conference | January 27-30, 2016

Submission deadline is Tuesday, May 19th.

Submit your proposal online through ProposalSpace.

This year OLA will be using ProposalSpace to help you keep track of the status of your proposal. Create a ProposalSpace account to get the process started.

Some topic suggestions:

These are only suggestions—please feel free to submit on any topic or innovative projects you think might be of interest to the OLITA community.

  • Cyber-security / privacy
  • Hands-on practical sessions for people trying to “keep up” or be inspired
  • Coding: Python, Ruby, Java, CSS, HTML
  • Responsive design – mobile optimization
  • Tech practices and policies
  • Big data and open data
  • Linked data and the semantic web
  • Digitization workflows, systems, and challenges
  • Research data management workflows, systems, and challenges
  • Cheap or free software for library tasks
  • Tools for productivity
  • Maker culture and hackerspaces
  • Online education/eLearning
  • Web analytics
  • Digital forensics
  • Web and mobile app development
  • Altmetrics
  • Data curation
  • Social media and communications
  • GIS and mapping
  • Open Access
  • Gaming
  • User experience

As always, if you’re interested in presenting on a topic but would like to share a slot, please indicate in your proposal that you’re interested in being matched up with another presenter.

Questions? Comments? Brilliant suggestions? Get in touch with the OLITA Planners! We’d be happy to discuss your proposal prior to submission.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

The 2015 OLITA Council: we’re listening

In late February, the 2015 OLITA Council met for the first time to figure out what to do with the glorious year ahead of us. New members joining us this year are Sarah Simpkin as Vice-President, and Susanna Galbraith, Kathryn Lee, Beth Mens, and Jeffrey Toste as Councillors-at-Large. It was a pleasure to meet so many new enthusiastic council members, and I look forward to a great year. Some of the activities that we have planned:

  • The Technology Lending Library is getting overhauled; we’re adding some fun new items this year that will give you and your patrons the chance to try out some maker activities.
  • OLITA’s Appetizers–short articles on subjects of particular interest to library information technology–will now be published on Open Shelf. If you would like to contribute an article, please contact Sarah Wiebe with your ideas.
  • As a council, we have also agreed that advocacy and awareness on issues such as open access, information security and privacy is an important role for us. So expect to hear more as issues arise.
  • Another theme that organically emerged from this year’s council is a desire to revamp our communications tactics. We divvied up responsibilities for the OLITA Twitter account and the Planet OLITA aggregator, and agreed to retire the OLITA Facebook page due to limited interest from the membership.

We also want to hear from you. As a council, we want our efforts to go towards addressing the needs of our members. We feel strongly that we will be more effective as a council if we know what you were hoping for when you checked the box to indicate divisional membership in OLITA. And, perhaps just as importantly, why you might not have checked that box. What issues are important to you in the field of information technology that you would like us to address? What do you want to learn about? What would you like us to do that we’re not doing–or what should we stop doing?

So please, engage with us in the comments section; we would really appreciate an open dialogue about what you would like to see from OLITA. You can also email a councillor directly, or submit anonymous feedback.

Coming up:

  • Digital Odyssey will be held in Toronto on June 12th with the theme of “Open Data, Open Heritage”. Save the date!
  • Even though it feels like SuperConference 2015 just wrapped up, the call for proposals for SuperConference 2016 will open on March 23, 2015. Details will be shared shortly. This year’s SuperConference planners for the OLITA division will be Steph Orfano and Ana Vrana–a huge thanks to both of them for taking on this responsibility!
  • The Access conference will be held in Toronto from September 8-11. OLITA council members Jan Dawson and May Yan are part of the organizing committee, so you know it’s going to be another great event.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

OLITA AGM at OLA SuperConference 2015

Please join us at the OLITA Annual General Meeting on Wednesday January 28th at 5:30pm in room #204 of the Metro Toronto Convention Centre to meet your new council and your fellow OLITA members.


  1. Call to Order and Welcome

  2. Approval of the Agenda

  3. Approval of Minutes from the last AGM

  4. Treasurer’s Report

  5. Annual Report

  6. New Business/resolutions

  7. Introduction of the new council and year ahead

  8. Hackfest result presentations

  9. OLITA Award for Technological Innovation

  10. Other business

  11. Adjournment

See you there!

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

OLITA Project Award 2015 – nominations open until November 15!

Each year, the Ontario Library and Information Technology Association (OLITA) Project Award is awarded to a project that demonstrates leadership in the application of technology to:

  • benefit library users,
  • enhance library operations, and
  • extend partnerships.

In addition to the recognition of their peers, the recipient of the award also enjoys an expenses-paid trip to the annual CLA conference, courtesy of OCLC Canada, where they will present the project at the annual OCLC / CLA symposium.

We know that there are plenty of library projects in Ontario that use information technology in innovative ways to solve problems faced by the library community, and we want to hear about them!


  • Nominations must be for a project and not an individual.
  • Nominations may be made by the library representative or by others.
  • All projects must be operational by the close of the nomination period.
  • Libraries must be operating within the province of Ontario.

If you are participating in, or know of, a project that you think is worthy of consideration for the OLITA Project Award (to be presented at the OLITA annual general meeting during the OLA SuperConference 2015), please submit a nomination by November 15th, 2014.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

Join the OLITA council for 2015! Nominations open until November 15.

Joining OLITA council is a fantastic way to shape the ways that library technology is embedded in Ontario Library Association efforts. You’ll collaborate with great people and be able to directly impact the directions that OLITA takes in the future. It’s also a great way to network with other council and board members, vendors, and other OLA members in the province.

We are seeking nominations for the following positions:

  1.  Vice President / President Elect (3 year term)
  2.  Councillor-at-Large (3 year term) – 2 positions

Councillors currently lead exciting initiatives such as running the Technology Lending Library, coordinating technology-related articles in Open Shelf, assessing OLITA Project Award nominations, and shaping the information technology content at the OLA SuperConference and Digital Odyssey events. In the past few years, OLITA Council has also been instrumental in the development of the OLA Event code of conduct and associated procedures. You will be able to direct your contributions according to your interests and efforts.

The Vice President / President Elect serves as VP for the first year before becoming President in their second year, and finally serving as Past President / Treasurer in the third year. During your first two years you will be a member of the OLA Board, and you will help coordinate the overall activities of OLITA.

Find information and the nomination form on our Election Information page. Board/Council experience is not necessary, and you don’t have to live or work in the GTA. In fact, the only criteria are that you be an OLA member and be interested in the technology aspects of library work. New professionals are also welcome to get involved.

Don’t be shy: nominate yourself (or a colleague who is interested in running) for a Councillor or VP position today!

Nominations due November 15th, 2014

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.

Linked Data Pt 2 – Resource Description Framework (RDF)

This is the second part of a three-part Appetizer on linked library data. If you are new to linked data, please take a couple of minutes to read part 1, where I introduced some of the concepts behind linked data. In part 2 we will look at some of the web standards that make linked data work:

  • Model data using RDF
  • Identify everything using a uniform resource identifier (URI)
  • Use a common web format such as XML or JSON

Model data using RDF

In part 1 we talked about using structured data so that machines can make use of it. We identified several pieces of information about Margaret Atwood that I can turn into statements:

  • Margaret Atwood is a person
  • Margaret Atwood’s name is “Margaret Atwood”
  • Margaret Atwood was born on 1939-11-18
  • Margaret Atwood was born in Ottawa, Ontario
  • Margaret Atwood is a novelist
  • Margaret Atwood wrote “The Handmaid’s Tale”

If you look closely at these statements about Margaret Atwood, each one breaks into three parts:

  1. Margaret Atwood (the resource we are interested in)
  2. Was born in (the relationship between the resource and something else)
  3. Ottawa, Ontario (the thing that describes the resource)

Resource Description Framework (RDF) Triples diagrams

In the model behind linked data, RDF, this statement is called a triple. Each triple, or statement, is made up of three pieces: subject, predicate, and object. The subject is the resource we are focused on, the object is what we are saying about the resource (our description of it), and the predicate expresses the relationship between the two.
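These three-part statements are easy to model in any programming language. As a rough sketch (plain Python tuples, purely for illustration; a real application would use an RDF library and full URIs rather than bare strings), the Margaret Atwood statements might look like:

```python
# Each RDF statement is a (subject, predicate, object) triple.
triples = [
    ("Margaret Atwood", "is a", "Person"),
    ("Margaret Atwood", "has name", "Margaret Atwood"),
    ("Margaret Atwood", "born on", "1939-11-18"),
    ("Margaret Atwood", "born in", "Ottawa, Ontario"),
    ("Margaret Atwood", "is a", "Novelist"),
    ("Margaret Atwood", "wrote", "The Handmaid's Tale"),
]

# Every triple answers three questions: which resource? what
# relationship? what value or other resource?
for subject, predicate, obj in triples:
    print(f"{subject} -- {predicate} --> {obj}")
```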

RDF Data as a Web of Data

Showing RDF triples as a web of information rather than a table


The statements about a particular subject, Margaret Atwood, can be visualized as a web of information about that resource. The statements all relate through the subject Margaret Atwood but are otherwise independent of each other. This is unlike a database, where several pieces of information about a resource are commonly collected together in a single record, for example a MARC authority record.

The neat thing about this web of information is that an object in the Margaret Atwood web, for example Ottawa, can be the subject of another web in your data set.

Two webs of triples joined together

For example, we may know that:

  • Ottawa is the capital of Canada
  • Ottawa has a population of 883,391
  • Ottawa is the birthplace of Bruce Cockburn

By joining the object Ottawa from the data around Margaret Atwood to the resource Ottawa as a subject in its own right we can follow different paths to learn more about a resource. For example, we now know that Margaret Atwood was born in the capital of Canada and we gain some information about other artists who were born in the same place.
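This kind of pivot — starting from one resource and following an object into its own web of statements — can be sketched with a naive in-memory triple store (plain Python and toy data, not a real RDF toolkit):

```python
# A tiny in-memory triple store: each statement is (subject, predicate, object).
triples = [
    ("Margaret Atwood", "born in", "Ottawa"),
    ("Ottawa", "capital of", "Canada"),
    ("Ottawa", "population", "883391"),
    ("Ottawa", "birthplace of", "Bruce Cockburn"),
]

def describe(resource):
    """Return every (predicate, object) pair where the resource is the subject."""
    return [(p, o) for s, p, o in triples if s == resource]

# Pivot: follow "born in" from Margaret Atwood to Ottawa, then read
# everything the data set says about Ottawa as a subject in its own right.
birthplace = next(o for s, p, o in triples
                  if s == "Margaret Atwood" and p == "born in")
for predicate, obj in describe(birthplace):
    print(f"{birthplace} {predicate} {obj}")
```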

This is nice within your own data set; you can pivot on your information from different perspectives. However, the point of linked data is to connect, or allow the potential for connection, with other data sets. For example, DBpedia makes connections to data sets such as GeoNames and the New York Times linked data pages which can allow mash-ups of descriptions about resources. How does a machine make these connections?

Identify Everything with a URI

A linked data web is not just a web of documents; it is a web of things. That is, we don't just have a URL for a page about Margaret Atwood; we also need to identify Margaret Atwood herself as a resource. We identify resources by giving them a uniform resource identifier (URI), generally in the form of an HTTP URL, so that a machine can go to that location and find out more about the resource, such as its type and its relationships to other resources. Ideally the URL returns information in RDF. Using the HTTP standard and a standard RDF model allows a program to find connections to other related data without needing to know the specifics of many different application programming interfaces (APIs).
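The switch from human-readable strings to URIs can be sketched like this (the DBpedia namespaces below are real, but the tuple representation is just an illustration, not a real RDF library):

```python
# With URIs, every part of the statement is a globally unique,
# dereferenceable identifier rather than an ambiguous string.
DBPEDIA = "http://dbpedia.org/resource/"
ONTOLOGY = "http://dbpedia.org/ontology/"

triple = (
    DBPEDIA + "Margaret_Atwood",   # subject: the person herself
    ONTOLOGY + "birthPlace",       # predicate: the relationship
    DBPEDIA + "Ottawa",            # object: the city as a resource
)

# A program can fetch any of these URIs over HTTP and get back more
# RDF describing that resource -- that is what makes the data "linked".
subject, predicate, obj = triple
print(subject)
```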

RDF statements using URIs instead of strings


In the statement “Margaret Atwood has birth place Ottawa” we can represent each part of the statement with a URI in DBpedia (in this example I have used the human-readable URLs):

  1. http://dbpedia.org/page/Margaret_Atwood for Margaret Atwood
  2. http://dbpedia.org/ontology/birthPlace for birth place
  3. http://dbpedia.org/page/Ottawa for Ottawa

In some cases it doesn't make sense to create an identifier for a piece of information, for example a birth date or a title. However, you can attach information to these literal values to let a program know the date's format or the title's language.
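As an illustration of how a literal carries that extra information (plain Python, not a real RDF library), a typed literal can be modeled as a value plus an annotation:

```python
# A literal object is the value itself plus an annotation telling a
# program how to interpret it: a datatype for dates and numbers, or
# a language tag for text.
birth_date = ("1939-11-18", {"datatype": "xsd:date"})
title = ("The Handmaid's Tale", {"lang": "en"})

value, meta = birth_date
print(value, meta["datatype"])
```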

Using Vocabularies

While there are some formal differences between schemas, vocabularies, and ontologies, I'm going to use the term vocabulary for simplicity. In the example above we stayed within the DBpedia vocabulary, but in RDF you can mix and match vocabularies in one statement. In fact, using common vocabularies is encouraged. For example, when linking the DBpedia resource “Ottawa” to the GeoNames resource “Ottawa”, DBpedia uses a property from the Web Ontology Language (OWL): owl:sameAs. This is a very commonly used relationship for linking data sets together:

Ottawa (DBpedia) owl:sameAs Ottawa (GeoNames)

Similarly, rather than defining its own “homepage” property, DBpedia simply uses the Friend of a Friend (FOAF) vocabulary, which already provides one.
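A program can exploit owl:sameAs links mechanically: once two URIs are declared equivalent, statements about either one apply to both. A rough sketch of that merging (toy data and a one-hop lookup, not a production reasoner; the GeoNames URI here is illustrative):

```python
# Toy data: two data sets describe the same city under different URIs.
triples = [
    ("http://dbpedia.org/resource/Ottawa", "owl:sameAs",
     "http://sws.geonames.org/6094817/"),
    ("http://dbpedia.org/resource/Ottawa", "capitalOf", "Canada"),
    ("http://sws.geonames.org/6094817/", "population", "883391"),
]

def same_as(uri):
    """Collect every URI declared equivalent to this one (either direction)."""
    aliases = {uri}
    for s, p, o in triples:
        if p == "owl:sameAs" and (s in aliases or o in aliases):
            aliases |= {s, o}
    return aliases

def describe(uri):
    """Merge statements about a resource across all of its sameAs aliases."""
    names = same_as(uri)
    return {(p, o) for s, p, o in triples
            if s in names and p != "owl:sameAs"}

# Asking about the DBpedia URI now also surfaces the GeoNames facts.
print(describe("http://dbpedia.org/resource/Ottawa"))
```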

Using a Standard Format

Finally, you need to publish these RDF statements in a format that is commonly used on the web. For programs to use statements about things, the statements need to be serialized in a way a program can parse. There are many ways to serialize RDF, including XML, Turtle, JSON, and JSON-LD, and any of these formats should yield the same triples. If you are on a linked data site, such as a DBpedia page or the New York Times linked data pages, look for a link to different serializations or try adding .rdf to the end of the URL.
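To make "serialization" concrete, here is a minimal writer for N-Triples, the simplest RDF serialization: one complete triple per line, terminated by a period. This is a toy sketch; real projects would use an RDF library rather than string formatting:

```python
def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines.

    URIs (strings starting with "http") are wrapped in angle brackets;
    anything else is treated as a plain literal in double quotes.
    """
    def term(t):
        return f"<{t}>" if t.startswith("http") else f'"{t}"'
    return "\n".join(
        f"{term(s)} {term(p)} {term(o)} ." for s, p, o in triples
    )

triples = [
    ("http://dbpedia.org/resource/Margaret_Atwood",
     "http://dbpedia.org/ontology/birthPlace",
     "http://dbpedia.org/resource/Ottawa"),
    ("http://dbpedia.org/resource/Margaret_Atwood",
     "http://www.w3.org/2000/01/rdf-schema#label",
     "Margaret Atwood"),
]
print(to_ntriples(triples))
```

Whichever serialization a site offers, a parser should recover the same set of triples from it.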

Here is an abridged version of the N3/Turtle serialization of the DBpedia resource Margaret Atwood. A couple of lines are worth noting:

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dbpedia: <http://dbpedia.org/resource/> .
@prefix dbpedia-owl: <http://dbpedia.org/ontology/> .
@prefix dbpprop: <http://dbpedia.org/property/> .
@prefix nyt: <http://data.nytimes.com/> .
@prefix wikipedia: <http://en.wikipedia.org/wiki/> .

nyt:N10507958644473712303 owl:sameAs dbpedia:Margaret_Atwood .

wikipedia:Margaret_Atwood foaf:primaryTopic dbpedia:Margaret_Atwood .

dbpedia:Margaret_Atwood rdf:type dbpedia-owl:Person ,
        dbpedia-owl:Writer ,
        foaf:Person ;
    rdfs:label "Margaret Atwood"@en ;
    dbpedia-owl:birthDate "1939-11-18"^^xsd:date ;
    dbpedia-owl:birthPlace dbpedia:Ottawa ;
    dbpprop:placeOfBirth "Ottawa, Ontario, Canada"@en ;
    foaf:surname "Atwood"@en .
Note in particular that the New York Times resource “N10507958644473712303” is declared to be the same as the DBpedia resource Margaret_Atwood, and that the DBpedia resource Margaret_Atwood has the type “Person.”

The web is a messy place, and not all data can be nicely formatted and interconnected. In the next appetizer on linked data I will talk about some of the ways linked data can be used by libraries, and why libraries may want to publish their own data to the web.

Some Resources

Tim Berners-Lee (2009) Design Issues. Linked Data (open access)
Tim Berners-Lee (2009) The next web. A TED talk, February 2009.  (open access)
Karen Coyle. Understanding the semantic web: bibliographic data and metadata. Chicago: American Library Association, 2010 (Library Technology reports ; v. 46, no. 1) (subscription required)
Tom Heath and Christian Bizer (2011) Linked Data: Evolving the Web into a Global Data Space (1st edition). Synthesis Lectures on the Semantic Web: Theory and Technology, 1:1, 1-136. Morgan & Claypool. (open access)
(2009) Introducing Linked Data and the Semantic Web (open access)
W3C Working Group (2014) RDF 1.1 Primer.

Slide images are from a presentation I did for a University of Waterloo IST Friday morning seminar in December 2013.


Thank you to Dan Scott, Corey Harper and MJ Suhonos who all answered questions for me related to this post! I appreciate everyone’s patience. Thanks also to Dan for help with proofreading.

This content is published under the Attribution-Noncommercial-Share Alike 3.0 Unported license.