Wednesday, December 9, 2009
Muddiest Point for 12/08 Lecture
I didn't have any muddiest points for this week- all pretty straightforward!
Sunday, November 29, 2009
Week 14 Reading Notes
"Weblogs: their use and application in science and technology libraries"
This article gives a brief, basic overview of what a blog is, including how blogging started, the different kinds of software that appeared as it grew more popular, and its place in the world of librarianship. I was interested in some of the points this article made, such as the fact that blogging is usually more reliable and richer in context than email, since posts are time-stamped and replies are archived alongside them, keeping the record more of a whole. As archivists, we focus a lot on the original context of records, and this gave me something to think about in terms of the authenticity of blogging as a form of record.
"Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons"
I really like the idea of making more use out of wikis in the professional world, including in library settings. I think that libraries are getting to the point where collaboration is going to really have a huge impact on how we come up with and manage our resources. Using wikis, such as how they did at East Tennessee State University, really streamlines a lot of library services, while also encouraging users, like faculty, to get involved and really make more use of what resources the library has to offer.
"Creating the academic library folksonomy: Put social tagging to work at your institution"
The relevance of tagging in the library profession is something we've discussed a lot in our LIS 2000 class. I like the practical approach this article presents to how to go about making a tagging system for use in libraries and research. I was most intrigued by the point that tagging brings attention to a wide variety of "gray literature" that hadn't previously been easily accessible. To me, this sounds like tagging is another way to start exploring the "deep web" and really tap into a wealth of previously inaccessible information.
"Jimmy Wales on the birth of Wikipedia"
I was really surprised by the actual structure of Wikipedia, especially the point where Wales mentioned that the site really only had one full-time employee. This is an excellent example of the power of a wiki at work. His emphasis on the importance of quality control is also intriguing; as he points out in the case of the Bush/Kerry debate, the fight is less between the right and the left than between the neutral and the jerks! It's interesting that he recognizes where there is room for contention in Wikipedia, but he also points out that it's one of Wikipedia's big strengths. I was especially impressed by the fact that you can keep pages on a watch list to make sure they're not vandalized.
Monday, November 23, 2009
Thursday, November 19, 2009
Muddiest Point for 11/17 Lecture
I'm confused about the necessity of empty elements. I reviewed the slides again, and they confirmed what I had in my notes: empty elements do not have content. So what's the point of adding an extra element if there's no content to put in it?
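After writing that, I poked at it a bit: an empty element can still say something through its presence and its attributes, the way `<br/>` in HTML marks a line break without containing anything. A little sketch with Python's standard parser (the record format below is made up):

```python
import xml.etree.ElementTree as ET

# A hypothetical catalog record: <available/> is an empty element.
# It has no text content, yet its presence and its attributes still
# carry information.
doc = '<record><title>Sample Book</title><available branch="main" copies="2"/></record>'

root = ET.fromstring(doc)
avail = root.find("available")
print(avail.text)           # None: no content inside the element
print(avail.get("copies"))  # '2': the data rides on the attributes
```

So the element isn't extraneous after all; the information just lives in the attributes (or in the bare fact that the element is there) instead of in text content.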
Tuesday, November 17, 2009
Comments for 11/17 Lecture
I commented on Stephanie's blog at:
http://lis2600sj.blogspot.com/2009/11/readings-week-12-1117.html?showComment=1258485393074_AIe9_BHgJvRWNo8VrWuB5tzPmjwcUD6tGc0U5ZkjSQKPE4jifNBObrc2wugi844ByCkUBtrQ6cUkv0hAxxxoXfDc9ZCRyqyPxC6XqYjb3lTF0xN65xc3KpA1vPfe5xVJGAy-ziKOxWv98IB-zjOABfj9bwXdCQPZ4ZHPPqNNxj9UaypP2aSegvGn1oAMyxNYf90whjJyTDCw-3t8R12i4ozTHe3kzDLqhOLU90ibCGbq5phGHZ869zM#c4043692069941068575
and Christa's at:
http://christaruthcoleman2600.blogspot.com/2009/11/readings-for-week-13.html?showComment=1258485649789_AIe9_BGaABj1ovH-qit9MRWjhYxKUFkcuvBqC6nvWj7kXHCbgHBAQspfY74mKgqq2IdlFLCPvx3G2i3YK6L1imzk9rpflbjeYfDxmmWwrVu9YxIYtrYA1yVOJFswqNpZweVcs8rWZFuOypJ8xd4ctzXF0PxZ5h1eFWGeNH8xoiVbgFxkYVmyV-EHl2C9Ggiy8ratQamzDFqLAVz-40TdWFHexEaI-Yu0TvNEU-A_-jDzHGQC-Ej6oIN86qm800fkTFNNbjH2Spnc#c1248993571133851214
Thursday, November 12, 2009
Reading Notes for 11/17 Class
"Web Search Engines," Parts 1 and 2
In the first section of this series, the author makes the point that there is simply far too much data on the web for every page to be indexed; automatically generated data and updates make the number of pages effectively infinite. He goes on to explain that all major search engines have a similar infrastructure and search algorithm, though the specifics are closely guarded secrets. Part Two explains how websites are indexed within the search algorithm; information is often indexed according to link popularity and common words and phrases. Since the process can be slow, search engines often take shortcuts to retrieve information quickly, such as skipping parts of the data set and caching the most popular sites (like Wikipedia) for quick returns.
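The indexing step he describes can be sketched as a toy inverted index; the three "pages" below are made up, but the word-to-page mapping is the core idea:

```python
from collections import defaultdict

# Toy corpus: page URL -> text (hypothetical data for illustration).
pages = {
    "a.html": "library catalog search",
    "b.html": "search engine index",
    "c.html": "library index",
}

# Build an inverted index: word -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# A query returns the pages that contain every query word.
def search(query):
    results = [index.get(w, set()) for w in query.split()]
    return sorted(set.intersection(*results)) if results else []

print(search("library index"))  # ['c.html']
```

Real engines layer ranking, caching, and distributed storage on top, but the lookup itself is this simple reverse mapping.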
"Current Developments and Future Trends for the OAI Protocol for Metadata Harvesting"
The goal of the Open Archives Initiative (OAI) is to promote standards that support interoperability. To this end, their Protocol for Metadata Harvesting (PMH) sets standards for metadata based on the common XML, HTTP and Dublin Core models, making it easier for institutions to share information without conflicts between differing metadata systems. The system has developed fairly effectively by setting standards, incorporating a good search engine, and allowing data to be effectively processed by crawlers. I think this is a good program for libraries to understand and work with, since cooperation between institutions is a way to save on resources while promoting efficiency.
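One thing that helped the protocol click for me: an OAI-PMH request is just an ordinary HTTP GET with a "verb" parameter. A sketch (the repository URL is hypothetical; the verb and metadataPrefix parameters are defined by the protocol itself):

```python
from urllib.parse import urlencode

# Build an OAI-PMH harvesting request. The base URL below is a
# made-up repository; "ListRecords" and "oai_dc" (unqualified
# Dublin Core) are real protocol values.
def oai_request(base_url, verb, **kwargs):
    params = {"verb": verb, **kwargs}
    return base_url + "?" + urlencode(params)

url = oai_request(
    "http://example.org/oai",    # hypothetical repository
    "ListRecords",
    metadataPrefix="oai_dc",
)
print(url)
# http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

The repository answers with an XML document of metadata records, which is why harvesters can be built from such ordinary web plumbing.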
"The Deep Web: Surfacing Hidden Value"
The essential point of this article is that traditional search engines- Yahoo, Google, etc.- only skim the surface of the web's content. Their search algorithms only return results with the highest number of links, which covers a relatively small portion of the internet. The author shows that the Web is much, MUCH larger (up to 500 times larger) than we believe it to be, but a huge portion of the content is in "deep web" sites that can only be found through direct queries. The author then makes the case for the BrightPlanet search engine, which is specifically designed to retrieve articles from the deep web. This is an interesting technology, since, as the author says, valuable information is being ignored simply because it cannot be easily accessed. The BrightPlanet search methods may be of interest to libraries simply because they can help extend and deepen results to give more thorough answers to patrons' queries.
Muddiest Point for 11/10 Lecture
I think the digital libraries lecture was fairly straightforward; for the first time in a while, I don't have any muddiest points!
Tuesday, November 10, 2009
Comments for 11/10 Readings
I commented on Steph's blog at:
http://lis2600sj.blogspot.com/2009/11/week-10-1110-reading-notes.html?showComment=1257880360226#c2457906710353663044
and Katie's at:
http://katiezimmerman2600.blogspot.com/2009/11/week-9-reading-notes.html?showComment=1257880517209#c4552136567127405323
Tuesday, November 3, 2009
Reading Notes for 11/10 Class
"Introducing the Extensible Markup Language"
For an introduction, this brief article was a bit dense. What I got from it was that XML essentially serves as the language that separates different elements of a file. It differs from HTML in that it doesn't follow a predefined set of tags; it functions more in the structure of documents than in the content. It defines the boundaries of different parts of the document, making it really the structural integrity of digital documents. I would have a hard time explaining it any better than that...
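To test my own understanding, I tried a small sketch with Python's standard parser. The tag names are invented, which is exactly the point: XML lets the author invent tags that name the data's structure, where HTML's tag set is fixed.

```python
import xml.etree.ElementTree as ET

# Made-up tags describing structure, not presentation:
doc = """
<journal>
  <issue number="3">
    <article>
      <title>Sample Article</title>
      <author>Doe, Jane</author>
    </article>
  </issue>
</journal>
"""
root = ET.fromstring(doc)
print(root.find("issue/article/title").text)   # Sample Article

# The parser enforces the structure: mismatched tags are rejected
# outright, which is what makes the document's "structural
# integrity" trustworthy.
try:
    ET.fromstring("<a><b></a>")
    well_formed = True
except ET.ParseError:
    well_formed = False
print(well_formed)   # False
```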
"A Survey of XML Standards"
This gives a technical review of the most important components of XML. Though this article, I'm sure, does an excellent job of pointing to the standards expected of XML use, I'm not sure I really follow what exactly these different components are. For example, are the catalogs part of the XML language itself? I wish this site had given practical examples; I found the suggested tutorial links similarly dense and difficult to work with. However, I suppose the site does serve its function in describing the ISO-approved standards- now, if I could only figure out what it is we're standardizing...
"Extending Your Markup"
A MUCH more helpful article. I liked the straightforward introduction that said XML was essentially used for annotating text, something I obviously didn't understand from reading the first article (see my above notes!). The examples helped; for example, comparing HTML layouts to XML pinpointed how the two systems were different and had different goals. Also, some of the extended capabilities make sense; I see the benefit of using namespaces to make meaningful and unique elements. I also like that this site relates XML to a library setting by showing how you can build bibliographic information clearly.
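The namespace idea is easy to try with Python's standard parser. The record below is made up, though the Dublin Core namespace URI is the real one:

```python
import xml.etree.ElementTree as ET

# The dc: prefix binds "title" to the Dublin Core vocabulary, so it
# can't collide with any other element that happens to be named
# "title" elsewhere in the document.
doc = """
<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A Sample Article</dc:title>
  <dc:creator>Doe, Jane</dc:creator>
</record>
"""
root = ET.fromstring(doc)
DC = "{http://purl.org/dc/elements/1.1/}"   # ElementTree's namespace syntax
print(root.find(DC + "title").text)         # A Sample Article
```

That's the "meaningful and unique elements" benefit in miniature: the full name of the element is the namespace URI plus the local name, not just the word "title".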
"XML Schema Tutorial"
I really like the W3schools website- it made learning the basics of HTML pretty painless, and this article was similarly straightforward. Basically, a schema sets up the components of an XML document, essentially serving as the guideline for the document you're creating. It creates a framework of ways in which data can be constructed and described. It's fairly easy to use, and supports various data types, which makes it more multifaceted. However, I'm not really seeing how it's any better to use than DTD; in the page where the two are compared, I don't really see how one is simpler than the other.
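To see the comparison side by side, here is the same content model in both notations (the classic "note" example the tutorials use). The DTD is terser; the Schema is wordier, but it is itself XML and can name datatypes, which DTDs can't:

```
<!ELEMENT note (to, from, heading, body)>
<!ELEMENT to      (#PCDATA)>
<!ELEMENT from    (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body    (#PCDATA)>
```

```xml
<xs:element name="note">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="to"      type="xs:string"/>
      <xs:element name="from"    type="xs:string"/>
      <xs:element name="heading" type="xs:string"/>
      <xs:element name="body"    type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```

So "simpler" may be the wrong word; the Schema's advantage is less about brevity and more about datatypes (xs:date, xs:integer, and so on) and about being processable by ordinary XML tools.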
Tuesday, October 27, 2009
Muddiest Point for 10/27 Lecture
I'm still really confused about how helpful CSS really is. To me it just seems like more work than straight HTML. Or is it just one of those things that makes everything easier once you get used to it?
Assignment 5
Here's the link to my Koha shelf.
http://upitt04-staff.kwc.kohalibrary.com/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=26
Week 9 Comments
I posted on Stephanie's blog at:
http://lis2600sj.blogspot.com/2009/10/readings-week-9-1027.html?showComment=1256660339901#c5471872198242146541
and Annie's at:
http://annie-lis2600-at-pitt-blog.blogspot.com/2009/10/week-8-reading-notes.html?showComment=1256660587310#c6062473644739150212
Thursday, October 22, 2009
Reading Notes for Unit 8
HTML Tutorial: This website is extremely straightforward and very helpful. It really breaks things down to a simple level (such as when explaining what HTML and markup language is) that makes a previously daunting concept somewhat more approachable. I appreciate that it explains the examples (like how the syntax works) instead of just throwing out codes and hoping you know what to do with them. Plus, the "try it yourself" tool is good for experimenting!
HTML Cheatsheet: A great companion site to use after going through the aforementioned tutorial. This lays out the really common attributes and text tags, so that it shouldn't be too hard to lay out a simple website. The information on frames seems a little beyond my skills at the moment; I think text seems straightforward enough, as are images and links. I am a little confused about how under the "forms" section, it says that you need to run a CGI script to create a form- I'm not entirely sure what that is.
W3 School Cascading Style Sheet Tutorial: I WAS wondering if HTML markup got really extensive and confusing. CSS defines how the elements of HTML are displayed, so it makes it easier to build upon basic elements of HTML for better-looking sites. Unfortunately, this is starting to look ridiculously complicated to me. I understand the basic concept of linking a style sheet into the HTML markup to avoid having to do a lot more work, but the actual mechanics of the process seem a little daunting; the syntax seems a little more difficult. To me (and this may be totally off base), this reminds me of databases, where keys linked information in one spot to another to save space and time.
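As far as I can tell, the payoff shows up once the same style has to apply in many places. A made-up sketch (the filenames and the rule are invented):

```html
<!-- page.html: the markup stays clean; presentation lives elsewhere -->
<link rel="stylesheet" href="site.css">
<h2>New Arrivals</h2>
<h2>Staff Picks</h2>
```

```css
/* site.css: one rule styles every <h2> on every page that links it */
h2 {
  color: navy;
  font-family: Georgia, serif;
}
```

Change navy to green once in site.css and every heading on every linked page updates; with inline styling you'd be editing each tag by hand. That's also why the database analogy feels apt: the style sheet is the one authoritative copy that everything else points at.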
"Beyond HTML": A Content Management System (CMS) allows users to create sites while focusing on the content rather than the HTML markup behind the pages; it provides standards and a framework for editing pages, which was particularly helpful at the institution described because too many people were editing in different styles. This article helpfully describes the entire process of the library establishing a CMS, from deciding what software to use to identifying templates. The article notes that not all libraries are moving to adopt a CMS, but it makes a compelling case that one helps standardize and clean up websites, while helping people who are not as familiar with HTML get used to the editing process. I think this is definitely a field that we should be paying attention to; HTML editing is a useful tool in any situation.
Wednesday, October 7, 2009
Jing Assignment
Here are the links to my screen captures through Flickr. I'm showing how you can personalize your browser through Firefox add-ons.
Screen Capture 1:
http://www.flickr.com/photos/42430590@N07/3991014577/
Screen Capture 2:
http://www.flickr.com/photos/42430590@N07/3991776490/
Screen Capture 3:
http://www.flickr.com/photos/42430590@N07/3991025227/
Screen Capture 4:
http://www.flickr.com/photos/42430590@N07/3991790808/
Screen Capture 5:
http://www.flickr.com/photos/42430590@N07/3991037053/
And here is the link to my video on how to install the same add-on:
http://www.screencast.com/users/laine05/folders/Jing/media/d8cf04a6-c240-4449-8eab-4e4b8934d9a4
Saturday, October 3, 2009
Week 6 Comments
I commented on Sara's blog at: http://lis2600infotechnology.blogspot.com/2009/09/week-6-readings-computer-networks.html?showComment=1254622845314#c8385267113849035340
and Andy's at: http://issuesininfotech2600.blogspot.com/2009/10/week-six-readings.html?showComment=1254623057043#c1298975999638952783
Friday, October 2, 2009
Muddiest Point for 09/29
Can GIF files be either lossy or lossless? The palette selection process sounds like a lossy form of compression because it only chooses 256 colors, but if it's a black/white photo, is it considered lossless because all the colors are included?
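I tried to convince myself with a toy model (the pixel data is invented): the palette step loses information only when the image has more distinct colors than the palette has slots, so a grayscale image with at most 256 shades would survive untouched.

```python
# An "image" here is just a list of (r, g, b) pixel tuples.
def palette_is_lossless(pixels, palette_size=256):
    """True if every distinct color fits in the palette."""
    return len(set(pixels)) <= palette_size

grayscale = [(v, v, v) for v in range(256)]                  # 256 grays
photo = [(r, g, 0) for r in range(256) for g in range(256)]  # 65,536 colors

print(palette_is_lossless(grayscale))  # True: nothing is thrown away
print(palette_is_lossless(photo))      # False: colors must be merged
```

If that model is right, the answer to my own question is "it depends on the image": the mechanism is the same, but for a black-and-white photo with few enough shades it happens to discard nothing.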
Tuesday, September 29, 2009
Reading Notes for Week 6
"Local Area Network" on Wikipedia
This article was kind of dense and technical, but I get the overarching definition of what makes up a LAN. Basically, any small group of computers in a relatively small space can be connected by a LAN. There are various ways to establish this connection, but I'm really only familiar with Ethernet and Wi-Fi, the two common types mentioned. The article doesn't really mention which LAN setup is optimal, or what the strengths of each technique are, which I think is what's really important for us to know: what to establish in a library setting. It does mention that cabling is the most common technique, so I imagine it would be the easiest system to implement.
"Computer Network" on Wikipedia
I actually found this article quite helpful, especially its definitions and descriptions of the differences between intranet, extranet and the Internet. I also was surprised to see the various different types of networks; I had only really heard of LANs before, but I suppose that's because I didn't really think about things like global area networks. I did appreciate the description of campus area networks and how the use of routers, switches and hubs directs the connection only to specific buildings, which I thought was interesting. I'm still a little vague on how exactly a switch works, so hopefully we'll bring that up in class a little.
"Management of RFID in Libraries" by Karen Coyle
RFID (which I hadn't heard of before) is basically a radio-transmitted barcode, which is essentially the analogy Coyle gives to describe it. It's an intriguing idea for all the reasons she mentions: potential for self-checkout, security, and making inventory not only easier but more successful. However, I think the strength of this article is that it is remarkably honest: there are simple ways to bypass the security measures, there are always worries about technology malfunctions, and patrons might not WANT to have to do everything themselves. What is necessary is an understanding that such technologies do exist and, as Coyle states, libraries have to go along with the culture that they serve. I think this is definitely an area to look into, but there is always the chance that this might be another fad that libraries could be better off without.
"Common Types of Computer Networks" on Youtube
This was helpful because it was incredibly straightforward and basic. PANs are the most common networks- i.e., your desktop, printer, scanner, and any other devices connected for personal use- followed by LANs, WANs, CANs, and MANs. I actually didn't know that LANs are able to do the things WANs used to have to do; I didn't realize Ethernet made LANs that much more powerful!
Sunday, September 27, 2009
Week 5 Comments
This week, I commented on Steph's blog at: http://lis2600sj.blogspot.com/2009/09/readings-week-5.html?showComment=1254102281481#c43017651530034031
and Natalie's at: http://introtoinfo.blogspot.com/2009/09/week-5-readings.html?showComment=1254102582136#c5316351853782916061
Thursday, September 24, 2009
Muddiest Point for 09/22 Lecture
I don't know what it is, but I'm really not getting the distinction between foreign and primary keys. Even reviewing the example from the lecture, I couldn't tell which key was which.
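Trying it out helped me more than the slides did. A sketch in SQLite (the table and column names are made up): a primary key identifies a row in its own table, while a foreign key is a column in another table that must point at one of those rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
con.execute("""
    CREATE TABLE author (
        author_id INTEGER PRIMARY KEY,    -- primary key: identifies an author
        name      TEXT
    )""")
con.execute("""
    CREATE TABLE book (
        book_id   INTEGER PRIMARY KEY,    -- primary key of *this* table
        title     TEXT,
        author_id INTEGER REFERENCES author(author_id)   -- foreign key
    )""")
con.execute("INSERT INTO author VALUES (1, 'Jane Doe')")
con.execute("INSERT INTO book VALUES (10, 'Example Book', 1)")   # points at author 1

# The foreign key must match an existing primary key value:
try:
    con.execute("INSERT INTO book VALUES (11, 'Orphan Book', 99)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)   # True: there is no author 99 to point at
```

So in the lecture's example, the key that can never repeat within its table is the primary key; the key that repeats freely but must match a row somewhere else is the foreign key.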
Tuesday, September 22, 2009
Reading Notes for Week Five
"Data Compression" from Wikipedia
While the "theory" section of this article went a bit over my head, I followed the concept of data compression and the two different types pretty clearly. Essentially, data compression is exactly what it sounds like- making data into smaller pieces so that more of it can fit onto the hard disk, conserving space. As the article mentions, there's the downside of extra effort required to decompress the file, but compression seems like a good idea, especially when it comes to library computers, which need to have access to a lot of different files. Lossless compression seems far more preferable, but it seems like there are ways to make lossy compression acceptable. Subtle changing of JPEG pixels could be good for mass online distribution and storage of photos, but it certainly can't be accepted as archival quality.
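To convince myself that "lossless" really means nothing is lost, here's a quick sketch using Python's standard zlib library (my own toy example, not from the article):

```python
import zlib

# Very repetitive data, which compresses well.
original = b"aaaaaaaaaabbbbbbbbbbcccccccccc" * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))  # the repetitive data shrinks dramatically
print(restored == original)            # True: a perfect, lossless round trip
```

Lossy compression (like JPEG) skips that round-trip guarantee and throws some data away for good, which is exactly why it can't count as archival quality.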
Data Compression Basics
A more in-depth and practical demonstration of the things talked about in the aforementioned Wikipedia article. The author describes how data compression works, and was nice enough to do it in a step-by-step, mostly uncomplicated sort of way. There were two concepts that particularly jumped out to me. First was the run-length encoding approach to lossless data compression, which works best for high-contrast images (I imagine black-and-white schematics and such would be the best candidates for RLE compression). Secondly, I thought the article really described the video compression process well, particularly when showing how breaking two scenes into grids and comparing the similar spots makes for easy targets of compressible information. This is one of those things I would like to try out, but I'm not quite sure if I actually know how to- at least I've got the theory down!
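Since I said I'd like to try it out: here's a minimal run-length encoder in Python. This is my own sketch of the general idea, not the article's code:

```python
def rle_encode(data):
    """Collapse runs of repeated values into [value, count] pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1  # extend the current run
        else:
            encoded.append([value, 1])  # start a new run
    return encoded

def rle_decode(pairs):
    """Expand [value, count] pairs back into the original sequence."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

# A high-contrast "scanline": long runs of white (W) and black (B) pixels.
line = list("WWWWWWWWBBBWWWWWWWW")
packed = rle_encode(line)
print(packed)                      # [['W', 8], ['B', 3], ['W', 8]]
assert rle_decode(packed) == line  # lossless round trip
```

Nineteen pixels become three pairs- and you can see why it falls apart on noisy photographs, where runs are short and the "compressed" version could end up longer than the original.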
"Imaging Pittsburgh"
Fun fact- Ed Galloway is my boss at the Archives Service Center. Moving on... This article runs through the process and challenges of creating an online database of images collected from three different institutions. I was most intrigued by the different metadata challenges; it becomes really apparent that some sort of standard is needed, like the Dublin Core model Galloway describes that we read about last week. I also like the idea of organizing the collections by themes, which is a definite advantage to having the pictures online. They can occupy more than one space or "album," making image searching more multifunctional and less constricted.
"Youtube and Libraries"
This is definitely an interesting idea. The library at my undergraduate university had something like this, a virtual tour where some students walked around the library explaining what was on each floor, what our special resources were, etc. It was really useful (at least, for those who viewed it- I'd be interested to see the number of hits), especially in detailing some policies and letting users know things like, "Hey, there's something more than vending machines on the fifth floor!" and, "Oh, we have microfilm?" I like the idea of this as a kind of educational outreach- there's a lot of times where new students can't remember things they heard in a lecture, and find it easier to be told than to look something up to read. I definitely think that the whole trend of reaching into popular media outlets is an innovative and appealing way to get involved in an increasingly digitized campus culture.
Friday, September 18, 2009
Comments for Week Four
I posted a comment on Stephanie's blog at: http://lis2600sj.blogspot.com/2009/09/readings-week-4.html?showComment=1253323217633#c3834207975435733450
and Katie's at: http://katiezimmerman2600.blogspot.com/2009/09/week-4-reading-notes.html?showComment=1253323544040#c7680227820766032384
Muddiest Point for 09/15 Lecture
I know we mentioned it a couple of times in class, and it was in our readings, but I still don't quite get what GNU is. Is it a type of operating system, or something else entirely?
Tuesday, September 15, 2009
Reading Notes for Week Four
"Database" from wikipedia.org
A lot of this article seemed to be over my head- I had to repeatedly click on some of the hyperlinks so I could attempt to understand what the article was talking about. What I found most helpful in the article was the discussion of the different types of databases. I hadn't realized that there were so many different categories, but it made sense when I thought about it more. I personally have experience with operational databases and external databases. I think that databases are really complex, given the variety of DBMS software out there, and a basic understanding of how they function would really benefit librarians in this digital age. Like driving a car, you can know how to search the database, but it really helps to know how it's been developed to be able to work out any kinks.
"Introduction to Metadata" by Anne J. Gilliland
I'm so happy to finally know what metadata is! In fact, it's something I work with on a daily basis. Working at an archive, I'm familiar with all the types of "information about information" that the author describes. In my finding aids, for example, I have to include administrative data (who accessioned the collection, the collection's official citation), descriptive data (the entire description of the collection and its subject), and use data (restriction information). What really interested me was the brief discussion of metadata management- people in the information business certainly make a lot more information in the process! Understanding metadata is important for many reasons that are described in the article- ease of access, proper teaching techniques of how to use the LOC system in schools, etc.- and I think that the concept is essential. In my track as an archivist, most of what I do consists of creating metadata about my collections; this article really helped in explaining the basic concepts and challenges of the topic.
"An Overview of the Dublin Core Data Model," by Eric J. Miller
The Dublin Core Metadata Initiative seems to be primarily interested in establishing a set of standards for metadata descriptions. Miller acknowledges the difficulty of this by stating that there would never be one perfect way to describe everything, especially in a cross-discipline manner. However, in order to facilitate this project, the DCMI is trying to establish specific characteristics that will allow for a more descriptive and standardized metadata system. Miller gives the example of creating a definition for a specific word- in this case, "contributor"- that can be accepted as standard throughout various metadata descriptions. I certainly see both the merits and the difficulties of such a program, but the discussion of the actual architecture of the program is a little confusing.
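To make the idea concrete for myself, here's what a minimal Dublin Core-style record might look like for one of my finding aids. The element names (title, creator, contributor, etc.) are real Dublin Core elements; the values are entirely invented:

```python
# A simple Dublin Core-style record, sketched as a Python dictionary.
# Dublin Core defines fifteen core elements; only a few are shown here.
record = {
    "title": "Photographs of Pittsburgh Steel Mills",
    "creator": "Archives Service Center, University of Pittsburgh",
    "contributor": "Unknown photographer",  # the element Miller's example standardizes
    "subject": ["Steel industry", "Pittsburgh (Pa.)"],
    "date": "1920-1935",
    "type": "Image",
    "rights": "Contact the archive for reproduction permissions.",
}

for element, value in record.items():
    print(f"{element}: {value}")
```

The point of the standard is that "contributor" means the same thing in my record as in anyone else's, so records from different institutions can be searched and exchanged together.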
Sunday, September 13, 2009
Week Three Comments
I made a comment on Natalie's blog at: http://introtoinfo.blogspot.com/2009/09/week-3-readings.html?showComment=1252872474430#c1117732529403039864
and Katie's at: http://katiezimmerman2600.blogspot.com/2009/09/week-3-readings-computer-software.html
Saturday, September 12, 2009
Flickr
The URL to my Flickr post is: http://www.flickr.com/photos/42430590@N07/
It's pictures of my cats and guinea pigs!
Wednesday, September 9, 2009
Reading Notes for Week Three
"Introduction to Linux"
I have very cursory experience with using Linux- a lot of my friends are computer people, if you can't tell- and I definitely agree with what the article said- it IS pretty intimidating at first glance. The desktop interfaces are stylish and the security pros are amazing, but for the average user, Linux just seems like too much to handle. Basically, I see Linux as an OS you can design to fit your own needs- which is great for the tech-savvy. For a library situation, where patrons can include technophobes and people who just aren't interested, Linux might not be the best OS. However, like any new information technology, it is important to pay attention to, especially the open-source software part- that could be quite important in the future!
"What is Mac OS X?"
Sad (?) to say, I think Mac OS X is the modern OS I've had the least experience with (yes, I've played with Linux more than Macs), and I think the reason is something the author states in his summary- the mandatory use of Apple hardware is maddening. As sort of the complete opposite of Linux (which can run on basically ANYTHING), OS X seems rather limited. That said, I do recognize the system's merits, however much I may be a Windows user at heart. It can run a bunch of *nix applications for personalization while also running nearly ubiquitous Windows-designed software (Office 08, anyone?). The system is comparable to any Windows unit- people may like it for functionality, interface, or just ease of use. Again, this is a system that I believe people need to understand better; at the library I used to work at, we had six Apples with OS X in my lab, and only one of the desk assistants (myself not included!) could do anything more than turn one on. I'm glad I understand the basic concepts of the OS a little better now, though some hands-on tutoring would be helpful...
"An Update on the Windows Roadmap"
Okay, now something I'm a little more familiar with. I've run the beta of Windows 7, and it's nice. Yes, Windows has problems (Linux and OS X aren't flawless, either!), but I think a lot of those problems come from the fact that it is the widest-reaching OS. For example, the article states that Vista supports 77,000 devices- a big number, which admittedly allows room for a lot of error. I do think, however, that Windows is the most user-friendly, as it tries to allow for the widest range of applications. It is also the most general- far more constraining than Linux or even OS X. That's why the "bottom line" of the article stresses that the designers are working to make the new systems compatible with even more drivers and applications. I think this is important to note because it gives a sense of the range of things Windows can do and support; while we can't be expected to know EVERYTHING, librarians should definitely keep up to date with Windows, as it is the most widespread OS available for public use.
Tuesday, September 8, 2009
Muddiest Point for 09/08 Lecture
I really don't have a muddiest point for this week- the lecture was pretty straightforward and I'm already fairly familiar with computer hardware.
Comments for Week Two
I made two comments this week. The blog URLs are:
http://shanlis2600libblog.blogspot.com/2009/09/computer-hardware-muddiest-point-week-2.html#comments
https://www.blogger.com/comment.g?blogID=4008293954130468690&postID=753300361886103126
Saturday, September 5, 2009
Reading Notes for Week Two
"Personal Computer Hardware" at Wikipedia.org
I think this article did a fairly decent job of summarizing the most important parts of computer hardware. I was already somewhat familiar with all of these terms and components since I helped a friend build their own computer, but I found the article most helpful for explaining the differences between the types of media devices. I find this kind of simple knowledge of computer hardware an essential for any modern librarian; since computers are becoming such a huge staple in libraries and research, knowing how a computer is put together is great knowledge for troubleshooting and general understanding of one's tools.
"Moore's Law" at Wikipedia.org and video at scientificamerican.com
I read the article, watched the video, and then read the article again. Moore's Law basically refers to the observation- not a natural law- that the number of transistors that fit on an integrated circuit doubles roughly every two years, an exponential rate of growth. This is rather astounding- as the video stated, this kind of advancement is practically unheard of in other technologies- and points to an interesting dilemma for librarians. Technology is continuing to grow smaller and smaller, advancing at a rate so fast it's hard to keep up. We should be aware of this so that we can keep in mind that technology upgrades are not a one-time occurrence; this exponential growth has not yet reached its plateau, and so information brokers must continually keep pace.
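A back-of-the-envelope sketch of what "doubling every two years" actually means- my own arithmetic, not a figure from the article, starting from the roughly 2,300 transistors of Intel's 1971 chip:

```python
# Double a 1971-era transistor count (~2,300, roughly the Intel 4004)
# every two years and see where that lands by 2009.
transistors = 2300
year = 1971

while year < 2009:
    transistors *= 2
    year += 2

print(f"By {year}: about {transistors:,} transistors")
# By 2009: about 1,205,862,400 transistors
```

Nineteen doublings take you from a few thousand to over a billion- which is why upgrades can never be a one-time event.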
Computer History Museum page at computerhistory.org
I wasn't aware that there even was such a place until I read this article. The museum looks like an interesting and unique place. For one thing, museums generally collect artifacts from hundreds to thousands of years ago; computer technology has expanded so quickly that an entire facility was opened to contain the history of less than a century. This speaks again to the same point as Moore's Law- we in the information business need to keep up! What I found most interesting was the huge difference between the Babbage Engine- the first "computing" machine, designed in the 1820s- and the tiny computers we have today (thanks to Moore's Law). Such striking differences in size and technological sophistication really bring home the point that information technology is a hugely intricate and historically rooted field of study.
Wednesday, September 2, 2009
Reading Notes For Week One
"Content, Not Containers"
I found the concept of various formats as "containers" an interesting one; I also agree that a huge challenge to libraries is figuring out how to deliver this unpackaged content in ways that attract and serve the community. I also feel that this article did a good job of detailing the place libraries need to occupy in this information-saturated webspace, a role that makes them sources of "authenticity and provenance of content... in an information-rich but context-poor world." (13) With the rise of blogs, wikis, and highly accessible search engines, the job of libraries seems not so much to serve as the sole repository of information, but to provide GOOD information, and to help instruct people- who often take the vast amounts of information at their fingertips for granted- in the proper techniques for utilizing and synthesizing that information.
"Information Literacy and Information Technology Literacy"
Lynch provides a succinct and readable summary of the key concepts of information literacy and information technology literacy. His main argument is for the spread of information technology literacy, which includes being able to use and understand how computers and their various information systems work. His argument that everyone should hold a moderate degree of proficiency in information technology reinforces the points made in "Content, Not Containers," and its application for librarians; in order to be adept in the field, librarians need to make a point of actively understanding the systems they promulgate and use.
"Lied Library at Four Years"
This article serves as a useful case study in how one academic library approached the various problems that are cropping up at most modern libraries. The main point here is indicated by the title; technology does not stand still, and in order to create a library that effectively meets the needs of its patrons, librarians have to stay abreast of changing technologies. Again, this point is buttressed by the arguments in the other two articles; only by keeping up with new information technologies can libraries serve their purpose. I was particularly struck by how much effort goes into technological advances; at one point, the author states that the librarians had to start planning a year and a half in advance to prepare to install new computers. The point to take away here, I think, is that librarians have to get used to thinking about technology not as something separate that has to be addressed once a year or so; staying current is an ongoing process that requires constant attention.
Tuesday, September 1, 2009
Muddiest Point for 9/01 Lecture
Well, to start off my new blog for LIS 2600, I'll begin with what I thought was the Muddiest Point for today's lecture. I seem to be clear on all the ways information technology is helpful to a library, but I guess I'm not seeing how exactly the opposite is true. I suppose libraries are a way to disseminate knowledge about IT, but it seems to me that IT doesn't gain nearly as much from libraries as libraries gain from IT.