Wednesday, December 9, 2009

Muddiest Point for 12/08 Lecture

I didn't have any muddiest points for this week; it was all pretty straightforward!

Sunday, November 29, 2009

Week 14 Reading Notes

"Weblogs: their use and application in science and technology libraries"
This article gives a brief and basic overview of what a blog is, including how blogging started, the different kinds of software that appeared as it became more popular, and its place in the world of librarianship. I was interested in some of the points this article made, such as the claim that blogging is usually a more reliable and context-rich record than email: posts are time-stamped and replies are archived alongside them, so the exchange is kept whole and in context. As archivists, we focus a lot on the original context of records, and this gave me something to think about in terms of the authenticity of blogging as a form of record.

"Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons"
I really like the idea of making more use of wikis in the professional world, including in library settings. I think libraries are reaching a point where collaboration will have a huge impact on how we develop and manage our resources. Using wikis, as they did at East Tennessee State University, streamlines a lot of library services while also encouraging users, like faculty, to get involved and make more use of the resources the library has to offer.

"Creating the academic library folksonomy: Put social tagging to work at your institution"
The relevance of tagging in the library profession is something we've discussed a lot in our LIS 2000 class. I like the practical approach this article takes to building a tagging system for use in libraries and research. I was most intrigued by the point that tagging brings attention to a wide variety of "gray literature" that hadn't previously been easy to find. To me, this sounds like tagging is another way to start exploring the "deep web" and tap into a wealth of previously inaccessible information.
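To picture how a folksonomy works under the hood, here is a minimal sketch of my own (not from the article; the item IDs and tags are made up): each user-supplied tag maps to the set of items it was applied to, so material like gray literature becomes findable through terms no cataloger ever assigned.

```python
from collections import defaultdict

# A minimal folksonomy: user-supplied tags mapped to the items they describe.
tag_index = defaultdict(set)

def tag_item(item_id, *tags):
    """Record that a user applied one or more free-form tags to an item."""
    for tag in tags:
        tag_index[tag.lower()].add(item_id)

def items_with_tag(tag):
    """Return every item any user has tagged with the given term."""
    return tag_index.get(tag.lower(), set())

# Two users tag the same working paper differently; both routes now find it.
tag_item("working-paper-042", "gray literature", "census", "unpublished")
tag_item("working-paper-042", "demographics")
tag_item("thesis-117", "demographics", "gray literature")

print(items_with_tag("gray literature"))  # both items (set order may vary)
print(items_with_tag("demographics"))     # both items again
```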

"Jimmy Wales on the birth of Wikipedia"
I was really surprised by the actual structure of Wikipedia, especially when Wales mentioned that the site had only one full-time employee. This is an excellent example of the power of a wiki at work. His emphasis on the importance of quality control is also intriguing; as he points out in the case of the Bush/Kerry controversy, the fight is less between the right and the left than between the neutral and the jerks! It's interesting that he recognizes where there is room for contention in Wikipedia, but he also points out that this openness is one of Wikipedia's big strengths. I was especially impressed that editors can keep pages on a watch list to make sure they aren't vandalized.

Monday, November 23, 2009

Assignment Six

My website can be found at www.pitt.edu/~alt64

Thursday, November 19, 2009

Muddiest Point for 11/17 Lecture

I'm confused about the necessity of empty elements. I reviewed the slides again, and they confirmed what I had in my notes: empty elements do not have content. So what is the point of adding an extra element if there is no content to put in it?
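If I understand it right (this is my own made-up example, not one from the slides), an empty element can still be useful because it carries data in its attributes, or acts as a flag simply by being present:

```python
import xml.etree.ElementTree as ET

# A made-up record: <pubdate/> and <online/> are empty elements.
# They have no text content, but they still carry data (in attributes)
# or serve as a marker just by being there.
record = ET.fromstring(
    '<book>'
    '  <title>Web Search Engines</title>'
    '  <pubdate year="2009" month="11"/>'
    '  <online/>'
    '</book>'
)

pubdate = record.find("pubdate")
print(pubdate.text)                        # None -- no content, as the slides said
print(pubdate.get("year"))                 # "2009" -- the attribute holds the data
print(record.find("online") is not None)   # True -- presence alone is the flag
```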

Tuesday, November 17, 2009

Comments for 11/17 Lecture

I commented on Stephanie's blog at:
http://lis2600sj.blogspot.com/2009/11/readings-week-12-1117.html?showComment=1258485393074_AIe9_BHgJvRWNo8VrWuB5tzPmjwcUD6tGc0U5ZkjSQKPE4jifNBObrc2wugi844ByCkUBtrQ6cUkv0hAxxxoXfDc9ZCRyqyPxC6XqYjb3lTF0xN65xc3KpA1vPfe5xVJGAy-ziKOxWv98IB-zjOABfj9bwXdCQPZ4ZHPPqNNxj9UaypP2aSegvGn1oAMyxNYf90whjJyTDCw-3t8R12i4ozTHe3kzDLqhOLU90ibCGbq5phGHZ869zM#c4043692069941068575

and Christa's at:
http://christaruthcoleman2600.blogspot.com/2009/11/readings-for-week-13.html?showComment=1258485649789_AIe9_BGaABj1ovH-qit9MRWjhYxKUFkcuvBqC6nvWj7kXHCbgHBAQspfY74mKgqq2IdlFLCPvx3G2i3YK6L1imzk9rpflbjeYfDxmmWwrVu9YxIYtrYA1yVOJFswqNpZweVcs8rWZFuOypJ8xd4ctzXF0PxZ5h1eFWGeNH8xoiVbgFxkYVmyV-EHl2C9Ggiy8ratQamzDFqLAVz-40TdWFHexEaI-Yu0TvNEU-A_-jDzHGQC-Ej6oIN86qm800fkTFNNbjH2Spnc#c1248993571133851214

Thursday, November 12, 2009

Reading Notes for 11/17 Class

"Web Search Engines," Parts 1 and 2
In the first part of this series, the author makes the point that there is simply far too much data on the web for every page to be indexed; automatically generated pages and constant updates make the number of pages effectively infinite. He goes on to explain that all major search engines share a similar infrastructure and search algorithm, though the specifics are closely guarded secrets. Part Two explains how web sites are indexed within the search algorithm: pages are indexed by the words and phrases they contain and ranked largely by link popularity. Since the process can be slow, search engines take shortcuts to retrieve information quickly, such as skipping parts of the data set and caching the most popular sites (like Wikipedia) for quick returns.
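To make the indexing idea concrete for myself, here is a toy sketch of my own (the pages, URLs, and links are invented, and this is nowhere near the article's real algorithms): an inverted index maps each word to the pages containing it, and results are ordered by a crude link-popularity count.

```python
from collections import defaultdict

# Made-up crawled pages: URL -> (page text, links found on the page).
pages = {
    "a.example/search": ("web search engines index pages", ["b.example/wiki"]),
    "b.example/wiki":   ("wiki pages about search engines", ["a.example/search",
                                                             "c.example/blog"]),
    "c.example/blog":   ("blog post about web engines",     ["b.example/wiki"]),
}

index = defaultdict(set)    # inverted index: word -> URLs containing it
inlinks = defaultdict(int)  # crude "link popularity": count of incoming links

for url, (text, links) in pages.items():
    for word in text.split():
        index[word].add(url)
    for target in links:
        inlinks[target] += 1

def search(word):
    """Return matching URLs, most-linked-to first."""
    hits = index.get(word, set())
    return sorted(hits, key=lambda url: inlinks[url], reverse=True)

print(search("search"))  # b.example/wiki ranks first: two pages link to it
```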

"Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting"
The goal of the Open Archives Initiative (OAI) is to promote standards that support interoperability. To this end, its Protocol for Metadata Harvesting (OAI-PMH) sets standards for sharing metadata built on XML, HTTP, and Dublin Core, making it easier for institutions to exchange information without conflicts between differing metadata systems. The protocol has developed fairly effectively by setting clear standards, supporting good search services, and allowing data to be processed efficiently by harvesters and crawlers. I think this is a good program for libraries to understand and work with, since cooperation between institutions is a way to save on resources while promoting efficiency.
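What helped me see why this lowers the barrier: the protocol is just HTTP requests that return XML, so a basic harvester can be very small. Here is a rough sketch (the repository URL is hypothetical, and a real harvester would also handle resumption tokens and error responses) of asking a repository for Dublin Core records with the ListRecords verb:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint -- substitute a real repository's base URL.
BASE_URL = "https://repository.example.edu/oai"

# ListRecords with the baseline oai_dc metadata format (unqualified Dublin Core).
query = "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(BASE_URL + query) as response:
    tree = ET.fromstring(response.read())

# Each record's metadata comes back as Dublin Core elements in XML.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```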

"The Deep Web: Surfacing Hidden Value"
The essential point of this article is that traditional search engines (Yahoo, Google, etc.) only skim the surface of the web's content. Their search algorithms mostly return the pages with the most links pointing to them, which represent a relatively small portion of the internet. The author shows that the web is much, MUCH larger (up to 500 times larger) than we believe it to be, but a huge portion of the content sits in "deep web" sites that can only be reached through direct queries. The author then promotes BrightPlanet's search technology, which is specifically designed to retrieve documents from the deep web. This is an interesting technology since, as the author says, valuable information is being ignored simply because it cannot be easily accessed. BrightPlanet's search methods may be of interest to libraries because they can help extend and deepen results, giving more thorough answers to patrons' queries.
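The distinction that made this click for me: a crawler only finds pages it can reach by following links, but deep-web content exists only as the response to a query submitted to a database. A rough sketch of what such a "direct query" looks like in code (the search URL and field names here are entirely made up):

```python
import urllib.parse
import urllib.request

# Hypothetical database search form -- a link-following crawler would only
# ever see the blank form, never the result pages it can generate.
SEARCH_URL = "https://catalog.example.org/search"

# The result page is generated on demand in response to this query; there is
# no fixed, linked-to URL for a crawler to stumble onto.
params = urllib.parse.urlencode({"q": "census microdata", "year": "1940"})
with urllib.request.urlopen(SEARCH_URL + "?" + params) as response:
    html = response.read().decode("utf-8", errors="replace")

print(html[:200])  # the dynamically generated result page
```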

Muddiest Point for 11/10 Lecture

I think the digital libraries lecture was fairly straightforward; for the first time in a while, I don't have any muddiest points!