This is the fourth post in a six-part series on enabling content discovery and combating information overload. In this post we briefly compare semantic technology with human cognition, and we take a quick look at the role of activity streams. Harvard historian Niall Ferguson wrote, “It is the unforeseen that causes the greatest disturbance, not the expected.” One skill people have over computers is knowing where to look next and quickly spotting anomalies. Dumb down a task and you will likely take away the person’s ability to see the unexpected.
External algorithms and machine-driven intelligence rely on rules and predetermined taxonomies that can hide the unexpected. People-centric tools, by contrast, can enhance our natural, and perhaps evolutionary, cognitive ability to notice it. Here is a graphic that contrasts human and machine intelligence.
Finally, in addition to masking the unexpected and requiring considerable training by human handlers, semantic tools can be complex to use. For example, Sandeep Raut recently provided some useful guidelines for Implementing and Using Social Media Analytics, outlining what he calls the typical steps in implementing social media analytics:
One: Collect the huge volume of unstructured data – comments, blogs, call center notes, and tweets from social sites
Two: Use statistical analysis and natural language processing (NLP) on the text to sort the information into good or bad sentiment
Three: Use categorization, classification, and association methods for text processing
Four: Identify the categories from the data to which this good or bad sentiment applies
Five: Present the results using visualization tools
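The five steps above can be sketched end to end in a few lines. This is only an illustrative toy: the sample posts, the keyword sentiment lexicon, and the keyword-to-category map are all invented stand-ins for the statistical NLP a real analytics tool would use.

```python
from collections import Counter

# Step one: a hypothetical sample of collected unstructured posts.
posts = [
    "Love the new release, great support team!",
    "Checkout keeps failing, terrible experience.",
    "Shipping was fast, love it.",
    "Support never replied, terrible service.",
]

# Steps two and three stand in for real NLP: a toy sentiment lexicon
# and a keyword-to-category map (both invented for this sketch).
POSITIVE = {"love", "great", "fast"}
NEGATIVE = {"terrible", "failing", "never"}
CATEGORIES = {"support": "support", "service": "support",
              "checkout": "checkout", "shipping": "shipping"}

def analyze(posts):
    """Steps two through four: score each post as good or bad,
    then attach that sentiment to the categories it mentions."""
    results = Counter()
    for post in posts:
        words = {w.strip(",.!").lower() for w in post.split()}
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        sentiment = "good" if score > 0 else "bad"
        for category in {CATEGORIES[w] for w in words if w in CATEGORIES}:
            results[(category, sentiment)] += 1
    return results

# Step five: the simplest possible "visualization" of the result.
for (category, sentiment), count in sorted(analyze(posts).items()):
    print(f"{category:10s} {sentiment:5s} {count}")
```

Even in this toy form, notice how much is pre-determined before a person ever sees the output: the lexicon, the categories, and the good/bad framing are all fixed in advance, which is exactly where the unexpected can get hidden.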
Why not skip the three middle stages and focus on the last step so people can use their cognitive powers and expertise to do a better job of deciding what is relevant to their unique needs? This is our proposed solution.
In summary, many of today’s information systems operate under an old-school management framework and require pre-determined inputs such as semantic algorithms and taxonomy building and assignment. Furthermore, they are limited to push-based activities and to deterministic discoveries based on known keywords or processes. A visual and temporal correlation of emerging themes, one that transcends these tools’ respective data architectures, would deliver an organic and persistent awareness experience.
Before we go into our solution, we want to acknowledge that the new social software platforms are starting to enable the emergence of more user-generated unstructured data within the enterprise. However, there are limits to what current approaches have achieved, and we see the Awareness Engine as a natural complement to these new tools.
For example, an important interaction might be visible for a few minutes in a micro-blogging thread (increasingly referred to as an activity stream) but then be pushed out of view by new events. As we noted, the linear stream has its limits in search; in activity streams it can become a firehose that is impossible to drink from. We need new visualizations that correspond more closely to how the human mind makes associations, both to support that associative process and to allow for the recognition of anomalies in patterns of interactions. It is the anomalies that count and that lead to innovation. This is why we are introducing the concept of awareness through better visualizations to complement activity streams and other social software capabilities.
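To make the anomaly point concrete, here is a minimal sketch that flags the kind of burst a scrolling stream would let slide past. The hourly message counts are hypothetical, and a real awareness tool would surface far richer signals than a simple deviation from the mean.

```python
import statistics

# Hypothetical hourly message counts from an activity stream;
# hour 7 holds a burst that would scroll out of a linear feed.
hourly_counts = [12, 15, 11, 14, 13, 16, 12, 58, 14, 13]

def flag_anomalies(counts, threshold=2.0):
    """Return the indices of counts that deviate from the mean by
    more than `threshold` sample standard deviations."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if abs(c - mean) > threshold * stdev]

print(flag_anomalies(hourly_counts))  # flags the burst at hour 7
```

The point is not the arithmetic but the contrast: a linear feed shows every hour equally and then forgets it, while an awareness view keeps the deviation visible long enough for a person to ask why it happened.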