The advent of computers and their incredible processing power has done little to improve our comprehension of prose-based text, especially in terms of syntax, semantics, tone, and structure.
It has certainly improved the way those texts can be delivered, liked, curated, shared, and displayed, but deeper understanding has remained largely the province of the human brain and its ability to form wisdom and expertise.
Recently, however, this trend has shifted as powerful visualization tools have emerged. These tools mine text for metadata: patterns in a writer's style and word choice that can be surfaced and displayed in ways that help uncover subtle trends, and even ideas, we might otherwise miss.
And while such tools are still the domain of "text nerds" and peripheral academic discussions, here's hoping it won't be long before computers can help us better understand the nuances of what we read.
The following presentation by gramener.com explores some of the ways text can currently be disaggregated, from the now-familiar word clouds all the way to some very complex-looking, less traditional text visualizations.
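At their core, even the simplest of these visualizations, the word cloud, boils down to counting word frequencies and scaling each word's display size by its count. Here is a minimal sketch of that idea; the sample text, stopword list, and linear scaling formula are illustrative assumptions, not taken from the Gramener presentation.

```python
from collections import Counter
import re

def word_frequencies(text, stopwords=frozenset({"the", "a", "and", "of", "to"})):
    """Return a Counter of lowercase words, skipping common stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

def font_sizes(freqs, min_pt=10, max_pt=48):
    """Map each word's count to a font size between min_pt and max_pt."""
    if not freqs:
        return {}
    lo, hi = min(freqs.values()), max(freqs.values())
    span = hi - lo or 1  # avoid division by zero when all counts are equal
    return {w: min_pt + (c - lo) * (max_pt - min_pt) // span
            for w, c in freqs.items()}

# Illustrative sample text (an assumption, not from the presentation)
text = "the data speaks and the data listens; data tells a story"
freqs = word_frequencies(text)
sizes = font_sizes(freqs)
print(freqs.most_common(2))  # 'data' appears most often
print(sizes["data"])         # the most frequent word gets the largest size
```

A real word-cloud library adds layout (packing words without overlap) on top of this counting step, but the frequency-to-size mapping is the part that "mines" the text.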
As huge stores of text are archived and analyzed, the potential is enormous, especially as human readers better understand the tools and how they might be used, and as we collectively refine the software that makes it all happen.