So social media is more humane.
I often turn my attention to the subtle relations between information addiction and user interfaces. Here I’ll cite the term I like most for this information addiction/anxiety: infornography, which, according to Wikipedia, denotes an addiction to or obsession with acquiring, manipulating, and sharing information; in modern society, information is considered not just a valuable commodity (from a practical point of view) but something that generates an almost sexual thrill. Information, in this sense, can facilitate the development of an alternate world of “escapism” (see, for example, the Japanese hikikomori phenomenon). Another (relatively rare) term I like to use in this context is fragmentia, defined somewhere as a cognitive disorder in which one feels cut off from a sense of wholeness because of frequent exposure to only incomplete parts of things and ideas. Both terms describe a state of mind that is almost always a symptom or consequence of an underlying mental condition such as obsessive-compulsive disorder. It’s telling that the Japanese, rather sensibly, identify the core problem of their (social) phenomenon as social withdrawal, while Western researchers seem to insist on the logical impossibility of an “internet addiction disorder”.
My perspective on user interfaces is that the desktop metaphor is just too cumbersome and slow once the user reaches a certain critical level of understanding. Graphical User Interfaces (GUIs) don’t scale well with the sheer number of alternative items that characterizes the information spaces of the Internet. There’s a highly complex pile of technology devoted to solving the problem of getting data from here to there. Instead of capturing data via pipes and applying whatever filtering we need, we introduce a whole new level of complexity and depend on some GUI to display it. But we have competing standards, things are incompatible, and you’re limited to what the designer intended. Sure, there’s an ocean of plugins, yet that just bloats GUIs even further. I quickly get bored with a theme and spend three-quarters of an hour looking for a new template for my WordPress-powered blog. We devote a lot of the visible screen to graphically representing commands (potential actions) that are often never used, in a completely language-agnostic way.
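To make the pipe contrast concrete, here’s a toy sketch in Python (the filename and the filter stages are invented for illustration, not any real tool’s API): small text filters that compose freely because they all speak the same plain-text format, with no GUI layer in between.

```python
# A toy Unix-pipe-style chain: each stage is a small generator that
# consumes lines of text and yields lines of text, so stages compose
# freely -- no GUI, no plugin API, just a shared plain-text format.
# (Illustrative only; "feeds.txt" and the filters are made-up examples.)

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

def grep(pattern, lines):
    """Keep only lines containing the pattern (like `grep`)."""
    return (line for line in lines if pattern in line)

def head(n, lines):
    """Pass through at most n lines (like `head -n`)."""
    for i, line in enumerate(lines):
        if i >= n:
            break
        yield line

# Equivalent in spirit to: cat feeds.txt | grep "wordpress" | head -n 5
for line in head(5, grep("wordpress", read_lines("feeds.txt"))):
    print(line)
```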
The natural alternative is text-based (or command-line) interfaces: they can be much more usable (in the sense that you have a certain amount of “control”), but in their old-fashioned/geeky version they are almost always much less learnable. The old-fashioned command line is dumb: you not only have to know all the relevant commands by heart, but you also have to spell them correctly (with all their myriad options).
There has to be a better way to design interfaces, one that makes the whole computing thing more usable and accessible: a text parser powerful enough that you can type in conversational language and have it decipher your intentions, allowing communication with the computer in a more natural way.
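As a deliberately naive sketch of what “deciphering your intentions” could mean in practice (the action vocabulary below is made up, and real systems are far more sophisticated), consider spotting a known action word anywhere in a sentence and treating the remainder as its argument:

```python
# A deliberately naive "conversational" parser: find a known action
# word anywhere in the sentence, treat what follows as its argument.
# The vocabulary is invented for illustration.

ACTIONS = {
    "open": "open", "show": "open", "display": "open",
    "delete": "delete", "remove": "delete", "trash": "delete",
    "find": "search", "search": "search", "locate": "search",
}

def parse_intent(sentence):
    words = sentence.lower().strip("?!. ").split()
    for i, word in enumerate(words):
        if word in ACTIONS:
            # Everything after the action word becomes the argument.
            return ACTIONS[word], " ".join(words[i + 1:])
    return None, sentence  # no recognizable intent

print(parse_intent("Could you please show my holiday photos"))
# -> ('open', 'my holiday photos')
```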
I’m writing about not getting distracted by the interface itself (or by aesthetics in general), and about effective ways to deal with information overload. I have experienced a deep internal conflict of this sort, to the point that I even became a de facto text-mode guerrilla some years ago, the kind that uses a text-based browser. There are now some prominent people out there convinced that the command line has the potential to be one of the most powerful tools for handling both problems, provided it isn’t bogged down by rigid syntax and inscrutable commands (see Aza Raskin, who challenges the whole application-centric computing model). So there’s a prevalent anti-desktop, content-focused, minimalist view (minimalism as a voluntary restriction of stimuli, one that dissuades dissipation and counsels concentration, so that you don’t think about using the computer but just about completing the task at hand). In fact, search engines are migrating toward answer engines (I don’t want search results, I want to know something), controlled through a modern form of command-line interface. These modern command languages are tolerant of variations, robust, and exhibit slight touches of natural-language flexibility: word order isn’t critical, synonyms or even related terms work, and spelling accuracy isn’t required.
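Here’s a minimal sketch of those three tolerances, extending the naive parser above (the command list and synonym table are hypothetical, and difflib’s fuzzy matching merely stands in for whatever a real system would use):

```python
# Tolerant command matching: synonyms map to a canonical command,
# spelling is matched fuzzily via difflib, and word order isn't
# critical because the command word is recognized wherever it
# appears in the line. The vocabulary here is hypothetical.
import difflib

COMMANDS = ["search", "open", "delete", "rename"]
SYNONYMS = {"find": "search", "look": "search", "show": "open",
            "remove": "delete", "trash": "delete"}

def normalize(token):
    token = SYNONYMS.get(token, token)
    # Fuzzy spelling: "serach" still resolves to "search".
    match = difflib.get_close_matches(token, COMMANDS, n=1, cutoff=0.75)
    return match[0] if match else token

def interpret(line):
    tokens = [normalize(t) for t in line.lower().split()]
    command = next((t for t in tokens if t in COMMANDS), None)
    args = [t for t in tokens if t != command]
    return command, args

print(interpret("serach my notes"))  # -> ('search', ['my', 'notes'])
print(interpret("notes find my"))    # -> ('search', ['notes', 'my'])
```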
(And I’m not claiming any kind of open-source philosophy here: nobody cares that the software behind GoodReads is proprietary as long as they can get on their FriendFeed. It’s sufficient if we believe that the “Web 2.0” movement (in its most abstract form) shows the potential gains of creatively experimenting with alternative institutional environments and governance structures.)
So social media is unequivocally more humane than traditional media, as it is also more conducive to criticism and participation. I think that, beyond the desired social media literacy, it’s also important that the tools themselves be designed more in the direction of helping even the literate (social media) user filter the noise and favor content over style.
Am I even making sense of things?