Friday, November 14, 2014

Muddiest Point from Nov. 11 Class

Are there any good sites to go to (other than the ones in the required readings) that can help troubleshoot HTML and CSS coding problems that might come up in the assignment?

Nov. 18 Required Readings

1. Paepcke, A. et al. (July/August 2005). Dewey meets Turing: librarians, computer scientists and the digital libraries initiative. D-Lib Magazine, 11(7/8). http://www.dlib.org/dlib/july05/paepcke/07paepcke.html

Seeing that this article is from 2005, I'm hesitant to believe that much of what the author is describing is still particularly relevant. The content is very interesting, though, and I wonder what a more current article on the topic would reveal.

This article discusses the intersection of the expectations and opinions of librarians and computer scientists in their collaboration on digital library initiatives. Both groups were very excited about the opportunities presented to them by the emergence of digital libraries. Computer scientists were excited about the chance to bridge conducting research and impacting day-to-day society. Librarians were excited to offset some burdensome costs and to use information technologies to ensure the library's impact on scholarly work.

The article goes on to discuss the interesting union of computer scientists and librarians. It also discusses how this union changed the face of libraries and publishing in many ways. Both librarians and computer scientists were deeply affected by the emergence of the world wide web, and since this article interested me so much I might try to find a more recent article that discusses where the two groups stand on this topic now.

2. Lynch, Clifford A. "Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age." ARL, no. 226 (February 2003): 1-7. http://www.arl.org/storage/documents/publications/arl-br-226.pdf


This reading covers the emergence of institutional repositories as a new strategy that allows universities to apply serious systematic leverage to accelerate changes taking place in scholarship and scholarly communication.

An institutional repository is a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members. Its essential goal is that of long term preservation of digital materials.


Scholarship and scholarly communication have radically changed with the implementation of new technologies, and digital repositories are a popular way for the university and university library to stay relevant. No two digital repositories are the same, and most must continue to adapt to keep current with constant technological updates and their effects on how scholars communicate.

3. Hawking, D. Web Search Engines: Part 1 and Part 2. IEEE Computer, June 2006.
Part 1: http://web.mst.edu/~ercal/253/Papers/WebSearchEngines-1.pdf


This article explains how the data processing behind web indexing works, with an emphasis on the search tools and portal search interfaces that make it happen. The actual process of crawling the web in order to index it is very involved, covering on the order of 400 terabytes of web data. Such processing requires a good network infrastructure and servers that can handle the workload.

Crawling also requires algorithms to visit sites and determine whether a page has already been seen, and this is done with a queue initialized by one or more seed URLs. These crawls are continuous so that responses to queries are quick and up to date.
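To help myself picture this, here is a minimal Python sketch of the queue-based crawl described above. The `fetch` function is a hypothetical stand-in for the real network layer (it would download a page and extract its links); everything else mirrors the seed-URL queue and the already-seen test:

```python
from collections import deque

def crawl(seed_urls, fetch, max_pages=100):
    """Breadth-first crawl. `fetch(url)` is assumed to return
    (page_text, outgoing_links) for the given URL."""
    frontier = deque(seed_urls)   # the queue, initialized with one or more seed URLs
    seen = set(seed_urls)         # every URL already fetched or queued
    pages = {}
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        text, links = fetch(url)
        pages[url] = text
        for link in links:
            if link not in seen:  # the "has this already been seen?" test
                seen.add(link)
                frontier.append(link)
    return pages
```

A real crawler adds politeness delays, revisit scheduling, and distribution across many machines, which is where the continuous-crawl part of the article comes in.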


The reading goes on to explain spamming and how search engines must maneuver through web landscapes of domains, servers, links, and pages to avoid spammers' calculated techniques for getting through.
Part 2: http://web.mst.edu/~ercal/253/Papers/WebSearchEngines-2.pdf

Part two of the reading spends more time explaining the indexing process that follows the crawl. Much like crawling, indexing requires special algorithms for scanning and inversion. Indexing keeps track of all of the documents found during a crawl and organizes them for easy retrieval in response to a query. There are different steps and options in the process, such as: scaling up, term lookup, compression, phrases, anchor text, link popularity scores, and query-independent scores. There are other algorithms involved as well, such as the query-processing algorithms that handle the most common types of search engine queries. The reading also explains problems with queries, such as poor search results. Quality, speed, skipping, early termination, and caching are all ways to help get better search results.
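Since the scanning-and-inversion idea was new to me, here is a tiny Python sketch of what an inverted index looks like (my own toy example, not the paper's algorithm): scan each document for terms, then invert so each term points back at the documents containing it.

```python
from collections import defaultdict

def build_index(documents):
    """Scan doc_id -> text, then invert into term -> sorted posting list."""
    index = defaultdict(set)
    for doc_id, text in documents.items():      # scanning
        for term in text.lower().split():
            index[term].add(doc_id)             # inversion
    return {term: sorted(ids) for term, ids in index.items()}

def query(index, *terms):
    """Documents containing ALL the terms (posting-list intersection)."""
    postings = [set(index.get(t.lower(), ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

docs = {1: "web search engines crawl the web",
        2: "indexing organizes crawled pages",
        3: "search queries hit the index"}
idx = build_index(docs)
print(query(idx, "search", "web"))   # -> [1]
```

The compression, skipping, and early-termination tricks in the paper are all about doing this intersection over billions of documents instead of three.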

4. Shreeves, S. L., Habing, T. O., Hagedorn, K., & Young, J. A. (2005). Current developments and future trends for the OAI protocol for metadata harvesting. Library Trends, 53(4), 576-589.

This reading discusses the Open Archives Initiative Protocol for Metadata Harvesting and gives an overview of the OAI environment. The focus seems to be on community- and domain-specific OAI services and how metadata plays into them. For example, metadata variation and metadata formats are two challenges that exist for these communities. Metadata variation arises when different data providers describe the same subject element using many different controlled vocabularies, which then have to be normalized. Metadata formats are a challenge because each new format means adding new paths to the data-processing routines. The reading does suggest that the OAI community has a future if guidelines are created to address these issues and problems.
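For reference, an OAI-PMH harvest really is just an HTTP request with a verb and a metadata format. The base URL below is hypothetical, but the `ListRecords` verb and the `oai_dc` prefix are standard parts of the protocol:

```python
import urllib.parse

base = "http://example.org/oai"   # hypothetical repository endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
print(base + "?" + urllib.parse.urlencode(params))
# -> http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

The response comes back as XML records (often Dublin Core), which is exactly where the metadata-variation problems the authors describe show up.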

Friday, November 7, 2014

Muddiest Point from Nov. 4 Class

I know I'm going to be asking about next week's lecture- but the readings for next week talk a lot about embedding a DTD in an XML document or keeping it separate but linking it. How does that work? I really can't picture this in my head.

Nov. 11 Required Readings: XML

1) Martin Bryan.  Introducing the Extensible Markup Language (XML): http://www.is-thought.co.uk/xmlintro.htm    
     I found this reading hard to follow because of all of the coding jargon and because I knew nothing about XML before reading it. I also feel that the reading could've been organized a little better, but the following are the main points I took away from the reading (a small example pulling these pieces together follows the list):
An XML file normally consists of:
1. An XML processing instruction
2. A document type declaration
3. A fully-tagged document instance whose root element type name matches the document type name
  • XML- a subset of the Standard Generalized Markup Language that is designed to make it easy to interchange structured documents over the Internet
  • After defining the role of each element of text in a formal model (DTD), users of XML can check that each component of the document occurs in a valid place within the exchanged data stream- a DTD is not required, though, and if no DTD is available an XML system can assign a default definition for undeclared components of the markup
  • XML is not a standardized way of coding text- it is a formal language that can be used to pass info about the component parts of a document to another computer system- XML is flexible enough to be able to describe any logical text structure
  • To use XML, users need to know how the markup tags are delimited from normal text and in which order the various elements should be used- systems that understand XML can provide a list of valid elements and will add the required delimiters. When the system does not understand XML, users can enter tags manually for later validation
  • Elements- are entered between matched angle brackets
  • Entity references- start with & and end with ;
  • A Document Type Definition (DTD) must be created to define tag sets
  • If attributes of elements are not supplied in a start tag, the program will use the default values declared in the DTD
  • Commonly used text can be declared within the DTD as a text entity
  • XML provides many techniques for special elements- usually a notation declaration is required to tell the program what to do with a referenced file's unparsed data
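To pull those pieces together, here is a minimal sketch of an XML file with all three parts, using a made-up memo document type:

```xml
<?xml version="1.0"?>
<!-- 1. the XML processing instruction is above;
     2. the document type declaration (with an embedded DTD) is below -->
<!DOCTYPE memo [
  <!ELEMENT memo (to, body)>
  <!ELEMENT to   (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
]>
<!-- 3. the fully-tagged document instance; the root element name matches the doctype name -->
<memo>
  <to>Library staff</to>
  <body>Digitization starts Monday &amp; runs all week.</body>
</memo>
```

The `&amp;` in the body is one of those entity references that start with & and end with ;.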

3) Extending Your Markup: An XML Tutorial by Andre Bergholz: http://xml.coverpages.org/BergholzTutorial.pdf
      This article did a much better job of explaining XML jargon and made the first reading much easier to understand. Unlike the first reading, this one defines and explains acronyms before using them throughout. The first reading is so much more confusing than this one that it shouldn't be a required reading.

But what I took away the most from this reading includes:
  • XML is a semantic language that lets you meaningfully annotate text
  • XML documents look a lot like HTML documents
  • XML elements can be nested and attributes can be attached to them- attribute values must be in quotes and tags must be balanced or explicitly closed
  • DTDs define the structure of XML documents- users can specify the set, order, and attributes of tags
  • When an XML document conforms to its DTD it's called valid- a DTD can be included in the XML itself or contained in a separate file (see the sketch after this list)
  • DTD elements can be nonterminal or terminal, and an element can have zero to many attributes- attribute definitions don't impose any order on when the attributes occur- a DTD's expressive power is limited
  • XML extensions let you link to more than one source
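This is also where my muddiest point about embedded vs. linked DTDs finally clicked for me, so here is a sketch of both options (two separate files shown back to back; `note.dtd` is a hypothetical filename):

```xml
<?xml version="1.0"?>
<!-- option 1: the DTD is embedded right inside the document -->
<!DOCTYPE note [
  <!ELEMENT note (#PCDATA)>
]>
<note>Remember the Nov. 11 readings.</note>
```

```xml
<?xml version="1.0"?>
<!-- option 2: the same DTD lives in a separate file and is only linked here -->
<!DOCTYPE note SYSTEM "note.dtd">
<note>Remember the Nov. 11 readings.</note>
```

Either way the document validates against the same rules; the only difference is where the rules are stored.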
4) XML Schema Tutorial http://www.w3schools.com/Schema/default.asp  
     Just like with all the previous tutorials- at first I was a little lost, but I caught on. I liked how this week we read about the code first and then did the tutorial.

Friday, October 31, 2014

Muddiest Point from Oct. 28 Class

With as extensive as HTML code is- do web designers/coders memorize HTML, or is there a go-to site or book that they use?

Nov. 4 Required Readings: Cascading Style Sheet

1) W3 School Cascading Style Sheet Tutorial: http://www.w3schools.com/css/ 
     I thought this tutorial did a better job of explaining the coding style and what I could do in the tutorial than last week's HTML tutorial did. This site also tried to make it a little more fun by offering a quiz. But I really liked the CSS examples it gave.

2) CSS tutorial: starting with HTML + CSS http://www.w3.org/Style/Examples/011/firstcss 
     I thought that the local links this tutorial provided were excellent. They did a really good job of breaking the process down step-by-step so I could understand all the layers of creating the page using the code.

3) Chapter 2 of the book Cascading Style Sheets: Designing for the Web by Håkon Wium Lie and Bert Bos (2nd edition, 1999, Addison Wesley, ISBN 0-201-59625-3) http://www.w3.org/Style/LieBos2e/enter/
     CSS works with HTML to create a web page/sheet/document. But CSS gives the creator a bit more editorial control and allows the creator to be more creative with the end result. For CSS to work, though, it must be used in a browser that supports CSS. And even in the right browser- there will be bugs and limitations.

Some key terms from the chapter:

CSS rule- a statement about one stylistic aspect of one or more elements -- a rule consists of two parts:
Selector- the part before the left brace that links the HTML document and the style
Declaration- the part inside the braces that sets forth the effect. The declaration has two parts separated by a colon- the property and the value
CSS style sheet- a set of one or more rules that apply to an HTML document-- for it to affect the HTML document it must be glued to it- for example, you can put the style sheet inside a style element at the top of the document.
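Putting those terms together, here is a small sketch of a rule glued to a document with a style element (my own example, not the book's):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>CSS rule anatomy</title>
    <!-- the style sheet, "glued" to the document inside a style element -->
    <style>
      /* selector { property: value } */
      h1 { color: navy; }      /* h1 is the selector; color is the property */
      p  { font-size: 14px; }  /* the declaration sits inside the braces */
    </style>
  </head>
  <body>
    <h1>Chapter 2</h1>
    <p>CSS gives the creator more control over the end result.</p>
  </body>
</html>
```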

What I really liked about this reading were the explanations of common tasks. I liked how easy the explanations were to understand and that the author showed an example of each task.

Friday, October 24, 2014

Muddiest Point from Oct. 21 Class

I'm unsure whether my posts for the first two required readings for the upcoming week are enough. I wasn't sure what to write, since they were just "practice" and there was no content to summarize or review. Were there specific expectations to write more about these first two readings? Or should we just write about our thoughts?

Oct. 28 Required Reading: HTML and Web Authoring Software

1) W3schools HTML Tutorial: http://www.w3schools.com/HTML/ 
     I thought that this link was really cool, but with very little instruction or explanation of what the tutorial expected or allowed me to do- I was a little lost.


2) HTML Cheatsheet http://www.webmonkey.com/2010/02/html_cheatsheet/
     For some reason I had a lot of trouble consistently accessing this link. On the same laptop, half the time an error would occur and the other half the link opened fine.
     When I was able to open it- it was very helpful and actually helped make sense of what I could do in the HTML tutorial link.
 

3) Pratter, F.E. (2011) Introduction to HTML, Chapter 2 of Web Development With SAS by Example, 3rd Edition (Google Book) http://books.google.com/books?id=l_MFZYMv3YgC&pg=PA15&lpg=PA15&dq=introduction+to+html+pratter&source=bl&ots=nXRgMFYZHz&sig=muV0UY1c_ePZO1pcdu8_V_IdbwQ&hl=en&sa=X&ei=Mvs4ULG9O4Gf6QG8h4GICw&ved=0CC0Q6AEwAA#v=onepage&q=introduction%20to%20html%20pratter&f=false 
     This chapter did a great job of providing examples of how to write HTML and then showing the resulting webpage view of the code. It is definitely a good reading to start with before beginning to play around with HTML code. The content was dense and difficult to follow at times, but the main points I pulled out of the reading include (a small page illustrating them follows the list):

  • All markup languages use tags to annotate the document content- HTML has a short list of standard tags that you need to learn in order to use it
  • HTML has a lot of repetition- writing it can be tedious
  • Automated tools called IDEs (integrated development environments) exist to save time
  • The best way to learn to write HTML is by viewing examples and getting a lot of practice
  • XHTML differs from HTML in that all elements must be lowercase, have closing tags, nest properly, and have their attribute values quoted
  • XHTML must also conform to a DTD for XML-based webpages- so you cannot use formatting instructions in your pages
  • HTML tags must be nested, and the standard is that all tags should be lowercase
  • Images can be generated using: GIF, JPEG, PNG, TIFF, and BMP
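Here is the kind of small page the chapter builds up, written XHTML-style to show those rules (the image file name is made up):

```html
<html>
  <head>
    <title>Hello, HTML</title>
  </head>
  <body>
    <!-- lowercase tags, properly nested, every element closed -->
    <h1>My first page</h1>
    <p>A quoted attribute and a self-closed tag:
      <img src="logo.gif" alt="library logo" />
    </p>
  </body>
</html>
```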




4) Goans, D., Leach, G., & Vogel, T. M. (2006). Beyond HTML: Developing and re-imagining library web guides in a content management system. Library Hi Tech, 24(1), 29-53.  
     This reading focused on the Georgia State University Library, its subject liaison librarians, and the content management system designed to manage 30 web-based research guides.
     It explains how 15 liaison librarians were developing, and had complete editorial control of, web guides for their subject fields at the university. As a result there was no consistency between the guides, and a lack of training was resulting in poor quality.
     Because of these issues the library hired a web development librarian. Standards were implemented, web content was improved, and the web presence was managed, all while the web content continued to grow. As a result the library now had an official library content management system to make the guides and web content more accessible to users/students.
The reading does a great job of explaining the different aspects, options (open source or not), and importance of a content management system in a library. Aspects such as the content, control, customization, and complexity of the CMS underscore the importance of such a system and show that it cannot be simply defined as just a library repository.

Friday, October 17, 2014

Oct. 21 Required Readings: Internet and WWW Technologies

1) Tyson, Jeff. How Internet Infrastructure Works.  http://computer.howstuffworks.com/internet/basics/internet-infrastructure.htm

  • No one owns the internet, but it is monitored and maintained- the Internet Society, established in 1992, oversees the formation of the policies and protocols that define how we use the internet
  • All computers with internet access connect to an Internet Service Provider (ISP) regardless of the type of network or connection- the ISP is a network that connects to a larger network, which connects to another network, and so on and so forth, creating the internet!
  • There is no controlling network- just several high-level networks connecting to each other through Network Access Points (NAPs)
  • Networks rely on routers to "talk to each other"
  • IP address/Internet Protocol- the language that computers use to communicate over the internet- IP addresses are 32-bit numbers
  • URL stands for uniform resource locator, which contains the domain name (the human-readable domain name vs. the machine-readable IP)
  • Root DNS servers handle billions of URL/IP address requests and are a reason the internet runs so smoothly- redundancy is the key to the DNS servers' success
Overall I liked this reading. It covered all the basics in very easy to understand language. It was, however, a long read, so it will definitely be an article I refer back to for information I don't remember.
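The domain-name-to-IP step is easy to try yourself; here is a tiny Python sketch (it needs a live internet connection, and the printed address may vary):

```python
import socket

# DNS in one line: human-readable domain name in, machine-readable IP out
ip = socket.gethostbyname("www.example.com")
print(ip)                                   # e.g. 93.184.216.34

# the dotted quad really is a single 32-bit number
a, b, c, d = (int(part) for part in ip.split("."))
print((a << 24) | (b << 16) | (c << 8) | d)
```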

2) Andrew K. Pace “Dismantling Integrated Library Systems” Library Journal, vol 129 Issue 2, p34-36. 2/1/2004 http://lj.libraryjournal.com/2004/02/ljarchives/dismantling-integrated-library-systems/

     I hesitate to take much of this article as applicable to today's "library land" issues, as the article is ten years old and I am sure much has changed on the ILS front in that time. I would hope that in the ten years since this article was first written, solutions addressing the interoperability and software issues have been found. I also struggled with some key terms and references to library software. It seems this article requires a bit of working library background and jargon to be able to follow the author smoothly.
     I would also argue that this "struggle" to integrate new technology while still providing the same level of library service to patrons is not a new one. Librarians pre-Gutenberg had to deal with the radical shift from scribes to printing presses and the mess that came along with it. A more recent example is the inception of the typewriter and the technology changes it brought along. Librarians have reason to struggle and fuss over this drastic, large-scale shift into a new digital age. But they need to remember that they have been doing this sort of thing for a looong time and there is no reason to doubt their capabilities.

3) Sergey Brin and Larry Page: The genesis of Google (Inside the Google machine). http://www.ted.com/talks/sergey_brin_and_larry_page_on_google

     I wonder whether the globe searches and bit-travel mapping would look any different if we were to see them today, since this video was filmed in 2004. I'm sure access and availability have increased in ten years' time. It's interesting to me that Google mentions wanting to grow the company with more searches, and that to do that they have invested in charities and grant programs under the Google Foundation.
     When they talked about the over 100 Googlette projects and the issue of staying organized, I immediately thought that librarians wouldn't have this problem! Prioritizing and organizing a large list of projects is exactly what we are trained to do. But taking the initiative to start over 100 different innovative projects at one time, with no guarantee of success- that is what librarians don't always do. And it is not a problem for Google, which jumps in and starts something even if it fails. That's why Google beat libraries to the punch when it comes to digital preservation and access to information with Google Books. Money is the major dividing factor between Google's innovations and library innovations. Libraries do not have the same capital to play with that Google does.
     Also, Google had better have librarians working on the research end of Google Answers!

Friday, October 10, 2014

Muddiest Point from Oct. 7 Class

Is there a way to do an advanced search in Google Scholar to only get conference papers as a search result?

Friday, October 3, 2014

Oct. 7 Required Readings: Computer Networks & Wireless Networks

1. Local Area Network
     A local area network, or LAN, is a computer network that interconnects computers in a limited space, such as an office, campus, or home. The purpose of its development in the 1980s was to cut costs by allowing more than one computer to share expensive storage and printers. Ethernet and Internet Protocol are the most popular modes for the network. LANs include network devices such as switches, firewalls, routers, load balancers, sensors, and the like. Another feature to note is that LANs can maintain connections with other LANs.
     Overall I felt that this reading complicated a very simple concept because of how poorly written the wikipedia entry was. And I am certain that there are much better-written readings from much more authoritative sources that would have been more beneficial for the class to read about LANs.

2. Computer Network
     As a whole I thought that this wikipedia reading was the best-written so far, but as mentioned earlier, I am sure there are much better-written readings out there. Either way, I appreciated how the reading discussed the darknet and some of its properties. Even though I really know nothing about how the darknet works, I still think it's really cool. I also thought it was very relevant how the reading mentioned current issues of privacy and mass surveillance.
     But the reading mainly discussed computer networks and the data connections that comprise them. Computer networks support access to the web and the sharing of storage, servers, printers, fax machines, email, and instant messaging between computers within the network. Computer networks are the core of modern communication. There are many ways to link devices in a network; two of the most popular are wired and wireless, with wired connections generally having the speed advantage and wireless winning on convenience. Information is carried through the network in network packets.

3. Coyle, K. (2005). Management of RFID in libraries. Journal of Academic Librarianship, 31(5), 486-489.
     I feel that this article is a little too outdated to be considered relevant for technology in the library science field. I understand that RFID chips still offer a lot of advantages for libraries and their collections, but with the current shift to eResources and the subsequent reduction in physical collection material- there are probably more relevant technologies. And I'm sure RFID chips have evolved since 2005 and can offer many more functions than the article describes.
     But when the article was written there was some controversy over switching to RFID chips. Even though they offer many benefits and functions, the disadvantages, privacy issues, and constant evolution of the technology caused librarians to hesitate to integrate the technology into their collections. An RFID tag is a chip that would be taped to a book to act as an identifier for circulation purposes. The chips can hold large amounts of information and are read somewhat like a barcode, but over an electromagnetic field rather than by sight. The chips' main advantages are their inventory and security functions for physical materials in the library's collection.

Tuesday, September 30, 2014

Muddiest Point from Sept. 30 Class

I'm a little confused about the review before the lecture, about MARC being replaced with a new system that I know nothing about- shouldn't we be learning more about it? Also, it was discussed that we as students- who are about to enter the field- don't need to know how to manage and create databases, but only need to know how they work conceptually. I just don't understand, then, what I should be focusing on gaining from the lectures now.

Friday, September 26, 2014

Sept. 30 Required Readings: Metadata and Content Management

1. Intro to Metadata, Pathways to Digital Information
     This reading discusses how libraries create metadata (data about data) for indexes, abstracts, bibliographic records, and really any document or data in their collection in order to provide access to it. But in the digital age, it's not just librarians/information professionals who are creating metadata about an object. Really anyone can create metadata by saying what the content, context, and/or structure of the object, data, information, etc. is, and user-created metadata is gaining momentum online.
     Metadata is governed by community-fostered standards to ensure quality and consistency, but there is no metadata standard that is interdisciplinary enough to be adequate for all collections/materials in all fields. There's also the issue of managing and maintaining metadata, though algorithms can at times ease this difficulty.
    All-in-all this reading reinforced my knowledge of how metadata certifies authenticity, establishes the context of content, identifies relationships with other information, provides access points, and much more to create, organize, and describe information. The reading kept mentioning folksonomies, and I have no idea what those are.

2. Dublin Core Data Model
     The reading describes a data model that attempts to create international and interdisciplinary metadata. This RDF and Dublin Core initiative will identify common cross-domain qualifiers through internationalization, modularization, element identification, semantic refinement, identification of encoding schemes, and identification of structured compound values.
     It's called RDF- W3C's Resource Description Framework- and it looks a lot like HTML meets a MARC record. I think it would be incredibly helpful and almost revolutionary if it worked and took over metadata standards. But it seems like one of those things that sounds great in theory- but could never truly work in reality.
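For what it's worth, a Dublin Core description in RDF's XML syntax looks something like this (the described URL is made up; the two namespace URIs are the real RDF and Dublin Core ones):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/report42">
    <dc:title>Annual Library Report</dc:title>
    <dc:creator>J. Smith</dc:creator>
    <dc:date>2014-09-30</dc:date>
  </rdf:Description>
</rdf:RDF>
```

You can see the "HTML meets a MARC record" resemblance: angle-bracket markup on the outside, field-like descriptive elements on the inside.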

3. Using Mendeley
     I really liked the author's approach and writing style in this reading. He seemed very honest and real about how Mendeley worked, and that made me consider his opinions more than I would have if he had been a more boring writer. I actually downloaded Zotero after reading the article (I already have and use Mendeley).
     He basically explained Mendeley's key features and their pros and cons. His point about the social networking aspect is really relevant, because that feature really is only as good as the people in your field who are using it. The feature does not add any value if no one is using it. But it does make Mendeley a strong competitor for scholarly collaboration.
     Other cool features include the recommendation feature that's based on the papers you view and share, and the way Mendeley will organize your documents and cite for you. It's also free to use (except that more storage costs money).

Muddiest Point from Sept. 23 Class

I'm still pretty confused about relational databases and how the whole primary key thing works.

Friday, September 19, 2014

Sept. 23 Required Readings: Database Technologies and Applications

1. Database

  • Database management systems (DBMS)- software applications that capture and analyze data and allow the definition, creation, querying, updating, and administration of the database. A DBMS is also responsible for maintaining the integrity and security of the stored data and for recovering information if the system fails.
  • A database is created to handle large quantities of information by inputting, storing, retrieving, and managing that information.
  • Database design and modeling: produce a conceptual data model that reflects the information to be put in the database (many use the entity-relationship model to do this), then translate that into a schema that implements the relevant information in a logical database design. The most popular is the relational model, represented by the SQL language; designing for this model uses a methodical approach called normalization.
  • Databases can be classified by contents, application area, and by technical aspect.

     This reading was actually the most helpful and easiest to understand out of all of the readings. I think it did the best job of explaining what normalization is and what a relational model is. There were some parts that went over my head, but I think I picked up enough of the basics to understand the main concept of databases. I thought the History section was interesting but also hard to follow; I got the most out of the Design and Modeling section.
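To make the relational ideas concrete for myself, here is a little sketch using Python's built-in sqlite3 module (my own toy schema, not from the reading): two tables from an entity-relationship view, with a primary key on each and a foreign key expressing the relationship between them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (
        author_id INTEGER PRIMARY KEY,   -- primary key: unique id for each row
        name      TEXT NOT NULL
    );
    CREATE TABLE book (
        book_id   INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER REFERENCES author(author_id)  -- the relationship
    );
""")
con.execute("INSERT INTO author VALUES (1, 'Bryan')")
con.execute("INSERT INTO book VALUES (1, 'Introducing XML', 1)")
for row in con.execute("""SELECT book.title, author.name
                          FROM book JOIN author USING (author_id)"""):
    print(row)   # ('Introducing XML', 'Bryan')
```

The JOIN works only because the primary key gives each author row a unique identity that other tables can point at.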

2. Entity Relationship Model in Database

  • The ER model is a data model used for describing the data aspects of a business domain- it is a systematic way of describing and defining a business process with components linked by relationships.
  • The database organizes and shows entities and the relationships that exist between them and that database.
  • Entity- noun, capable of independent existence
  • Relationship- verb, captures how entities are related to one another
  • There are different levels of entity relationship models, these include: conceptual data model, logical data model, physical data model
     This reading was very dense and required the reader to have a lot of background information to understand some of the concepts. Even after reading the Database wiki page, I struggled to understand some of the terms and explanations. The author didn't give enough information for me to understand things like cardinality constraints or what a sound definition of attributes would be. Another challenge was that I wasn't sure which parts of the article I needed to understand more than the rest. I know it all pertains to class, but I didn't know what I was supposed to understand by the end. Knowing that could've helped me weed through the very dense reading.

3. Database Normalization Process
     I think this article could have been much more helpful if it were in its original format. Trying to follow explanations that depended on missing images was very challenging. I could tell that a good deal of terms and definitions were being explained to the reader in a simple way, but without the images I really gained nothing from the reading other than definitions of normalization and atomicity. But who knows- it seems hard for me to grasp a lot of the tech concepts, so maybe even with the images I would have been just as lost.

  • Normalization- natural way of perceiving relationships between data
  • Atomicity- the indivisibility of an attribute into similar parts
  • Primary key- the unique identifier required for each row
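Even without the article's images, atomicity is easy to sketch: below is a made-up row whose authors field holds several values at once, and the normalized version that splits it into atomic rows (the names are hypothetical).

```python
# unnormalized: the "authors" field is not atomic (two values in one field)
row = {"book_id": 1, "title": "Web Development With SAS", "authors": "Pratter; Smith"}

# first normal form: one atomic value per field, one row per author
normalized = [{"book_id": row["book_id"], "author": name.strip()}
              for name in row["authors"].split(";")]
print(normalized)
# [{'book_id': 1, 'author': 'Pratter'}, {'book_id': 1, 'author': 'Smith'}]
```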

Muddiest Point from Sept. 16 Class

There wasn't too much from the lecture or assignments that I am confused about- so my muddiest point isn't going to relate directly to the class but to the readings we were assigned. It sucks that links get broken, but I am hoping that we can go over more of what the third reading was trying to explain about database normalization. I got lost without the pictures, and the layout of the text made it confusing to follow the narrative.

Thursday, September 11, 2014

Sept. 16 Required Readings: Multimedia Representation and Storage

Data Compression. http://en.wikipedia.org/wiki/Data_compression 

  • Data compression is encoding information using fewer bits than what's in the original representation to reduce the size of the data file
  • Lossless- removing statistical redundancy
  • Lossy- unnecessary information is removed
  • Ultimately: data compression saves space but does not always save time- and many times the quality of the data is reduced
     This article gave me a basic understanding of what data compression is. I now understand what is happening with a ZIP file. It also shed some light on the compression of different kinds of data, like audio and video. It seems that one needs to really consider what to compress and what not to compress if quality matters for the particular data in question.


Data compression basics (long documents, but covers all basics and beyond): http://dvd-hq.info/data_compression_1.php  

     The above article reaffirmed the issue of choosing between quality and saving space. A lot of it went over my head, especially when it started explaining different sorts of algorithms for different kinds of data. What I did understand was really interesting. The main points that I took away include:

  • Data compression allows users to store more in the same space and allows them to transfer data in less time or using less bandwidth
  • Lossless compression recovers information identical to the original data (before compression)
  • Lossy does not recover identical data because bits are removed
  • Some compression does not always make the data smaller (for example, when using RLE), so consideration needs to be given to how to compress different data
  • There are different algorithms to save space for different data sequences, but quality can be lost in all of them
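RLE is simple enough to try, and it shows both the lossless round trip and the way the "compressed" form can get bigger (my own toy version, not the article's code):

```python
def rle_encode(s):
    """Store each run of repeated symbols as a (count, symbol) pair."""
    runs = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        runs.append((j - i, s[i]))
        i = j
    return runs

def rle_decode(runs):
    """Lossless: expanding the runs recovers the original exactly."""
    return "".join(symbol * count for count, symbol in runs)

print(rle_encode("aaaabbbcc"))   # [(4, 'a'), (3, 'b'), (2, 'c')] -- shrinks
print(rle_encode("abcdef"))      # six runs of length 1 -- now bigger than the input
assert rle_decode(rle_encode("aaaabbbcc")) == "aaaabbbcc"
```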


Edward A. Galloway, “Imaging Pittsburgh: Creating a shared gateway to digital image collections of the Pittsburgh region” First Monday 9:5 2004 http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1141/1061 

     "Imaging Pittsburgh" was a really interesting account of how the University created online access to 20 different photographic collections spread across the University Archives Service Center, Carnegie Museum of Art, and the Historical Society of Western Pa. The University of Pittsburgh's Digital Research Library was awarded the National Leadership from IMLS to fund this project and what a lot of the account focused on and what I found most interesting was challenges to orchestrating the project. A lot of it seemed to stem from conflict of different organizational between the institutions- especially when it came setting a universal metadata scheme for the project when every institution had their own metadata scheme. It was interesting how the leaders of the project found a way to work with the different institutions and create a cohesive plan of execution.


Paula L. Webb, YouTube and libraries: It could be a beautiful relationship C&RL News, June 2007 Vol. 68, No. 6 http://crln.acrl.org/content/68/6/354.full.pdf 

     This article was a super cool read about how to use YouTube to the advantage of libraries. I think it's a cool idea because YouTube is free for the library to use and free for the patron to access. I liked how the article focused on college libraries and on getting students more familiar with the library and where to find it. YouTube is perfect for students because it is a very familiar platform and would probably be preferred over LibGuides. It's also great because students still have access to the videos after they graduate. It could also be a lot of fun for the librarians and staff to create videos, and if they do it right, it could make the library a more efficient and familiar tool at patrons' disposal.

Tuesday, September 9, 2014

Muddiest Point from Sept. 9 Class

I was a little bit confused about the digitization portion of Assignment 1- Are we to start with ten separate images/objects to digitize but only submit two images for grading through our flickr accounts?

Friday, September 5, 2014

Muddiest Point from Sept 2 Class

I was a little unclear on whether I was to create a blog entry on the required readings for week one and week two, or skip week one. I know it was discussed, but so much was covered that I couldn't trust my memory. Luckily an announcement was posted on course web for clarification. The announcement also clarified that I am to generate a muddiest point question, which I don't remember being discussed in class. But all is well now.

Thursday, September 4, 2014

Sept. 9 Required Readings: Computer Basics, Digitization


    1)  Vaughan, J. (2005). Lied Library @ four years: technology never stands still. Library Hi Tech, 23(1), 34-49.

         This article was a case study of the technology changes and upkeep in an academic library. I was impressed by how efficient the library and staff were at planning and executing the swap of every computer in what seemed to be a very busy library. But what really caught my attention was that 10% of computer time was used by the community- I think it's great to allow the community access to the library's resources, but, in the case study, it became a problem for students when computer terminals would fill up.
         I wonder how the community members could use the computers. Wouldn't access require a log-in ID and password? And wouldn't such a log-in only be given to students and faculty? I understand that space and funds are limited- but if the library is going to allow community members access, then shouldn't there be an effort to provide enough computers so that users are not asked to sign off? If community members can use the computers, then they should not be the first to be asked to sign off (as the case study explained). I don't think the library should allow access to "everyone" if "everyone" cannot be equally accommodated.

    2)  Doreen Carvajal. European libraries face problems in digitalizing. New York Times. October 28, 2007
    http://www.nytimes.com/2007/10/28/technology/28iht-LIBRARY29.1.8079170.html 

       
         This was an interesting article that explained the ongoing attempts of many European countries, and of individual European libraries and museums, to somewhat follow in Google's footsteps by digitizing collections. The goal is to preserve cultural heritage without violating any copyright laws (as Google did). The governments of these countries have put a lot of money toward this project, but more is needed to accomplish the goal. These libraries and museums are now looking to private companies, like Google, to help fund the digitization project.
       When the goal is to preserve the heritage of an entire culture, who is responsible for paying the large amount of money to make it happen? Is digitization really the same thing as preservation? Should private companies be allowed to pay in and try to make a profit off a culture?

    3)  A Few Thoughts on the Google Books Library Project http://www.educause.edu/ero/article/few-thoughts-google-books-library-project

        I found the author's argument interesting that Google does not make books obsolete, but rather helps to preserve them. And of course Google is giving a second life to many books that have been out of print or would otherwise have been very difficult to access. Google can make very strong arguments about all the good the Google Books project is doing for people, culture, and books. But is digitization really preserving anything? Technologies are always changing. What if Google goes out of business? All the books could be lost. And if someone finds a way to ensure permanent online access to and preservation of the Google books, are they really preserved if Google cannot properly catalog or index them in an intelligent manner?