July 18th - DANS

I read a statistic that the total amount of data in the world doubles every two years, a rate so astronomical that it is impossible to wrap your head around. Having all of this data means nothing, though, if there is no way to store it and process it into useful information. While the topic interests me somewhat, I would rather be the one gathering and processing the data; the archiving and dissemination of it is not something I am particularly concerned with.

That said, the lectures at DANS were intriguing to listen to. I found Herbert Van de Sompel's presentation on link rot and content drift especially interesting, even though the solution seemed to be delaying the inevitable. Creating an intermediary link so that hyperlinks can stay up to date is a smart idea, but I feel that it just moves the problem from the researchers who write the hyperlinks in the first place to the institutions that try to keep the links updated. If those institutions fail, there would be nobody left to update the links, and the problem would persist. I do not really see a way to permanently fix link rot unless artificial intelligence reaches a point where it can update these links automatically, or these institutions somehow become permanent. Still, I think this solution is the best there is right now. While this is not my cup of tea, I am glad that there are people out there who want to fix these issues. And while at DANS these people are mainly focused on the academic applications of these technologies, I am sure these innovations will trickle down to the consumer level, allowing for a smoother web experience.
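
To get the intermediary-link idea straight in my own head, here is a minimal sketch of how such a resolver might work: authors cite a stable persistent identifier instead of a raw URL, and an institution keeps a registry that maps that identifier to wherever the content currently lives. This is purely illustrative and not how DANS or Van de Sompel actually built their systems; the identifier scheme, registry, and URLs below are all made up.

```python
# Hypothetical sketch of an "intermediary link" resolver.
# The persistent identifiers and URLs are invented for illustration only.

# Registry maintained by an institution: persistent ID -> current location.
PERSISTENT_ID_REGISTRY = {
    "pid:example-dataset-001": "https://archive.example.org/datasets/001",
}

def resolve(persistent_id: str) -> str:
    """Return the current URL registered for a persistent identifier."""
    try:
        return PERSISTENT_ID_REGISTRY[persistent_id]
    except KeyError:
        raise LookupError(f"No current location registered for {persistent_id}")

if __name__ == "__main__":
    # An author cites "pid:example-dataset-001" in a paper; years later the
    # resolver still points readers to the content's current home, as long
    # as the institution keeps the registry up to date.
    print(resolve("pid:example-dataset-001"))
```

The sketch also makes my worry concrete: the whole scheme depends on someone continuing to maintain that registry.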