The Data-Information-Knowledge-Wisdom (“DIKW”) model is useful for examining how well an organization is deriving value from its unstructured content.

In his book, Too Big to Know,* David Weinberger credits Russell Ackoff, a leading organizational theorist, with making a pyramid-shaped depiction of the DIKW model in a 1988 address to the International Society for General Systems Research. It represents the idea that data becomes more valuable as it is successively filtered, winnowed, and structured. As stated by Weinberger:

“Starting from mere ones and zeroes, up through what they stand for, what they mean, what sense they make, and what insight they provide, each layer acquires value from the one below it.”

In other words, data points have more value when users know what they have and can work with them and make decisions based on them. When “wisdom” is achieved, organizations are able to use the data for strategic business decisions, e.g., to evaluate and absorb records of acquired companies, to automate the auditing and compliance functions, or to evaluate loan portfolios more quickly and accurately.

The definitions for each of the parts of the DIKW model are somewhat imprecise and overlapping, and it can be more useful to think of the model as a continuum rather than as discrete phases. Here are some points to consider in gauging where your organization is on the DIKW continuum for each of its major collections:


To use the framework of an earlier blog, “The Four Key Dimensions of Purpose-Driven Data Quality,” the further content moves up the DIKW pyramid, the more accurate it becomes, the better it fits its intended purposes, and the less effort it takes to use, with the added value exceeding the incremental cost of further refinement.

BeyondRecognition’s data governance technology provides the tools to move unstructured content from data to wisdom:

  • Visual classification clusters all files based on visual similarity, even non-textual image files. This permits the classification of those files and the elimination of unwanted document types.
  • High-value variables for each document type are identified and extracted, and whole files can be content-enabled by converting graphical elements to textual values.
  • Single-instance editing provides reliable data that can be further normalized either internally within a collection or based on data values from other systems of record.
  • The process is scalable, enabling near real-time connection with other systems of record, either to input values to those systems or to audit and validate the data they contain.
  • Data values can be exported as delimited text values or exported directly into advanced analytics systems.
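To make the idea of visual-similarity clustering concrete, here is a minimal, hypothetical sketch (not BeyondRecognition’s actual algorithm): each page image is reduced to a simple average-hash bit signature, and pages whose signatures differ by only a few bits are grouped together. The image data and threshold are illustrative assumptions.

```python
def average_hash(pixels):
    """Bit signature: 1 where a pixel is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of bit positions where two signatures differ."""
    return sum(x != y for x, y in zip(a, b))

def cluster(images, threshold=2):
    """Greedy clustering: assign each image to the first cluster whose
    representative signature is within `threshold` bits of its own."""
    clusters = []  # list of (representative_signature, [names])
    for name, pixels in images:
        sig = average_hash(pixels)
        for rep, members in clusters:
            if hamming(sig, rep) <= threshold:
                members.append(name)
                break
        else:
            clusters.append((sig, [name]))
    return [members for _, members in clusters]

# Toy data: two near-identical "invoice" pages and one very different "photo".
invoice_a = [[10, 200], [10, 200]]
invoice_b = [[12, 198], [11, 201]]
photo     = [[200, 10], [200, 10]]

groups = cluster([("invoice_a", invoice_a),
                  ("invoice_b", invoice_b),
                  ("photo", photo)])
print(groups)  # the two invoices cluster together; the photo stands alone
```

Real systems use far richer visual features than this toy hash, but the core idea is the same: cluster by visual similarity first, so classification and culling of unwanted document types can happen per cluster rather than per file.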

For more information, read some of our recent blog posts:

To receive your copy of Guide to Managing Unstructured Content, go to:


* Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room, Basic Books; First Trade Paper Edition, Jan. 2014; available on Amazon at
