The last few decades have seen a revolution in data handling and efficiency. With Google operating in the petaflops range and terabytes becoming a trivial amount of storage, the world of information has expanded exponentially. Although I admit I was not alive at the time, it was not that long ago that megabytes were considered enormous. Now a megabyte is so tiny that your average program will likely use up to fifty of them in memory. People have music libraries that run into the terabytes. It is almost magical how quickly we have grown up.
The primary reason I am even discussing this obvious trend is Cisco's new router. Now you might ask why make a fuss about any old router, but this new 322 Tbps machine can process so much information that you could download the Library of Congress in about a second. (And don't ask whether that's terabits per second or tebibits per second; the latter would make it even faster.) It is also reported that the router could theoretically hold up if every single person in China started a video call at once. I cannot even begin to imagine how expensive this device is, but it is still representative of the enormous amounts of information our society is moving at any given time.
Another example of increased information handling is MapReduce (and its open-source counterpart Hadoop). Both are systems for processing huge amounts of data in a parallel paradigm, in search of efficiency and reliability. In other words: take a lot of data and do something with it, fast. Before MapReduce, designs centered on the idea of one central server for everything rather than processing information in parallel. This seemed logical at the time, and still is in most cases, but when your input reaches the scale of Twitter, let alone Facebook or Google, you need something much more efficient.
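The core MapReduce idea can be sketched in a few lines of Python. This is only a toy, single-process word count, not how Hadoop actually implements it: the real systems run the map and reduce phases on many machines at once, and the function names below are just illustrative choices.

```python
from collections import defaultdict

# Map phase: each "mapper" turns a chunk of input text
# into a list of intermediate (word, 1) pairs.
def map_chunk(chunk):
    return [(word.lower(), 1) for word in chunk.split()]

# Shuffle phase: group the intermediate pairs by key, so every
# occurrence of the same word ends up in one bucket.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: collapse each key's bucket into a single count.
def reduce_counts(groups):
    return {key: sum(values) for key, values in groups.items()}

# In a real deployment each chunk would live on a different machine.
chunks = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for chunk in chunks for pair in map_chunk(chunk)]
counts = reduce_counts(shuffle(pairs))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

The appeal is that map and reduce are independent per chunk and per key, so the framework can scatter them across thousands of machines without the programmer writing any distribution logic.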
Anyway, I figured I would take a post to step back and look at just how far we have come since the days of floppy disks (raise your hand if you still have a floppy drive in your computer!).