
No Moore, Please

February 10, 2010

Commenting on my last post, Craig astutely observed the increasingly telescopic nature of history and evolution in a cultural or social sense, and that significant changes that might once have been written off as mere fluctuations or minor deviations seem to be occurring more and more rapidly.  That point should have been the focus of that argument, as I agree with it to a greater extent than some of the other points made in that post.

At the same time, I have often silently wondered whether we’ve passed the last significant inflection point in the evolution of technology now that society has had enough time to cope with the fact that information has “gone digital,” reducible to a stream of bits arranged in alternating “on” and “off” positions capable of being endlessly stored, repeated, and manipulated by automatons.  I’ve stayed silent on this point out of deference to Moore’s Law, not to mention the fact that, as any introductory calculus class would teach you, a fixed perspective at an instantaneous point makes it particularly difficult to measure a trajectory.  At least in the eyes of history, I would have thought the point I stand on was instantaneous, but having grown up with and around technology (perhaps more so than many of my friends and peers), I cannot think of a single other change as revolutionary as the advent of translating information into digital form, rendering it processable by ever-improving hardware.  Sure, hardware will continue to improve, but that only constitutes technology in a very materialistic sense, not in the sense of how technology helps us interact with the world.  In that sense, the last great leaps forward were the Internet and the cellular telephone (and their latest synthesis: the smartphone), since they radically changed how information is distributed; we have moved from a world where information was “pushed” at us to a world of all “pull,” all of the time.  Arguably, Google was another great leap forward, since it removed the necessity of organizing information hierarchically and made the search term itself the relevant organizing principle, meaning that information is now utterly unfettered by traditional, material organizational constraints.  But again, these are just ways of rendering and distributing information that is only so manipulable because it is digital in the first place.

Getting back to the telescoping nature of evolution, it seems as though most changes in recent history have played out on the stage of information, unsurprisingly, and plenty of people smarter than I will tell you how radically the Internet has changed every way we think about any form of thought.  Medical advances, genetic screening and engineering, cultural formations, intellectual pursuits of every sort: all have been radically transformed by the ability to share information instantly and at zero cost.  However rosy a picture that paints in your mind, some have feared this change as ultimately damaging to the biological organisms that were once the custodians of this corpus of information (i.e., our collective brains).  Rote information retention is less and less relevant (absent a nuclear holocaust destroying all functional computers) when information can be retrieved only slightly more slowly, but with arguably greater accuracy, from an instantly available reference source such as Google or Wikipedia.  Others would argue that searching, at least, increases brain function.  The two conclusions are not at odds; we may simply be exerting energy that is ultimately directed at decisions which do not result in greater knowledge retention.

If I may be so bold as to suggest a reconciliation, here’s my take boiled down to a pithy one-liner:

The Internet isn’t making us dumber; we’re just storing our knowledge in the cloud.

As with any cloud storage option, the efficiency of the system will depend on the speed and reliability of the interconnection between the end point and the cloud.  The data managed by the cloud is likely to reach the utmost reliability and accuracy, given that large numbers of people will independently verify or contradict any information that can be reduced to either “true” or “false” (at least for functional purposes).  However, the dispersed organization of that information increases the possibility of error or sabotage in transmission.  After all, Google is subject to “optimization” (read: manipulation), Wikipedia can be defaced before a moderator catches and corrects the edit, and human error and bias will persist, at minimal levels, even after large numbers of independent and concurring sources seem to verify some bit of information.

Therefore, the last remaining concern is somewhat obscured by educators who fear that we no longer seek out, retain, and refine large amounts of data, and instead scan digital repositories for the information and conclusions we presumed from the start.  What that unnecessarily worrisome approach glosses over is that we are not concentrating on instilling the skills of good research from the start, and still drill in the notion that raw historical information has to be assimilated to be effective.  I am no expert on pedagogy, and will not claim that I know how best to make people sift through data for accuracy, but I suspect that educators harbor too much per se distrust of online sources, and that they are only encouraging plagiarism and shoddy research by turning the Internet into a black market for lazy students.  Instead, educators should fully embrace the use of the Internet for any and all student work, with all the access to information it has to offer, and concentrate on instructing students in the art of sifting, assessment, and general bullshit-detection.  Not only would this discourage cheating, but it would produce students with better skills that are more closely applicable to real-life conditions (think of those math tests where you couldn’t use a calculator, as though you might not have access to a calculator at some point in real life).  If we work to instill the more ineffable and holistic tools of information processing and discernment that individual organisms (again, read: students’ brains) would find relevant today, then humans won’t be going obsolete any time soon, Moore’s Law or not.
