The Future of Knowledge: Part Two
Last week I talked about the interrelationship between data, information and knowledge—and ended, tantalizingly, with the question of what it all had to do with the Connected Battlefield. Well, here’s the answer you’ve been waiting for.
Like all of us, today’s warfighter is drowning in data. We have developed all manner of systems for capturing and delivering data. Petabytes of the stuff flow into storage every day and are doggedly organized into information (largely by computers) and then carefully interpreted by analysts (computers and humans). Often, the volume of data to be evaluated far exceeds the capacity of the information processing system to deliver timely and accurate knowledge.
While we want to know everything there is to know about any situation, we simply can’t know it all. We know some things and we can extrapolate what is likely true. But in every case, the analysts involved are operating on an individual interpretation of available information—a subset of the whole truth. The very purpose of knowledge is to suggest action, and the warfighter can only act based on what he or she knows.
Increased data ≠ increased knowledge
Today we know more about any given situation than ever before, yet the growth in data gathered is not matched by an equal growth in our knowledge. I am not saying that, because they have incomplete and insular knowledge, our warfighters’ actions are wrong. I’m saying that the data-information-knowledge process must drive toward improvement, and so the character of knowledge is changing. In the future, knowledge will be more complete, less parochial and less subjective.
But how will this be accomplished?
A prerequisite is, of course, high performance computers. The vastness of data we seek to process can only be handled by computers of enormous capability in speed and memory. They can organize data into information and evaluate that information far faster and far deeper than any collection of human analysts. They can do it over a far broader dataset and produce results in near real time. It is the only tool we have to cope with the data storm we are now in.
And then, there are genetic algorithms. In the early 1950s, Arthur Samuel, while at IBM, began to explore how order could emerge spontaneously in computer programs that learn from their own experience. (The result was a learning program that could play checkers and beat amateur players: pretty good for a computer of that era!) John Holland, who worked alongside Samuel at IBM in those years, went on to formalize genetic algorithms at the University of Michigan.
The resulting principle takes a systematic view of data: a population of candidate solutions is evaluated against a measure of reward, the best performers are remembered and recombined, and that accumulated experience improves the next round of analysis. This was the birth of genetic algorithms. Today, they are everywhere: search engines use adaptive, learning algorithms to improve results, and stock trading programs trade based on evolutionary strategies. Anywhere there are complex data structures, genetic algorithms are used.
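To make the evaluate-remember-improve loop concrete, here is a minimal genetic algorithm sketched in Python on the toy "OneMax" problem (maximize the number of 1-bits in a bitstring). The problem, the population size, the mutation rate and the fitness function are all illustrative choices of mine, not anything from Samuel's or Holland's work.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.01

def fitness(genome):
    # Reward: count of 1-bits (the "OneMax" toy objective).
    return sum(genome)

def select(population):
    # Tournament selection: of two random candidates, keep the fitter.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Recombine two remembered good solutions at a random cut point.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Occasionally flip a bit, injecting variation to explore.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Start from random guesses, then evolve: evaluate, select, recombine, mutate.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of a possible", GENOME_LEN)
```

The loop never examines the problem analytically; it simply rewards what works and breeds from it, which is why the same machinery transfers to search ranking, trading strategies and other domains with no closed-form solution.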
So how does this apply to our warfighters?
The modern warfighter faces complex challenges. Conflicts comprise many moving parts: logistical, political and often economic, in addition to tactical and strategic. Often they are driven by seemingly mysterious behaviors and asymmetric threats. Planners in these conflicts must be constantly alert to the unintended consequences of any action they take. These conflicts behave like weather and climate: complex and difficult to predict. We refer to weather and the environment as systems, so it makes sense to treat conflicts as systems too, and to model them as we model the weather and the environment.
Up until the 20th century, the general school of thought among scientists was that all systems could be modeled using linear, deterministic algorithms, in which the system’s response is proportional to the change in its inputs: small causes, small effects. It was thought that even weather (which had evaded modeling since, well, forever) could be modeled by deterministic means, if only we had enough data and could correlate it.
During the 1950s and 1960s, Edward Lorenz, an American mathematician and meteorologist, grew skeptical of the prevailing idea that weather could be modeled this way. While at MIT, he worked with one of the most powerful computers of the day in an attempt to model weather. What he discovered was that most atmospheric phenomena involved in weather forecasting are non-linear and could not be modeled deterministically: a minute change in any variable could have a vastly disproportionate effect on the weather that followed. This revelation came to be known as “The Butterfly Effect.”
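Lorenz’s observation is easy to reproduce. The sketch below integrates his famous three-equation convection model with a simple fixed-step Euler scheme (an illustrative choice, not a high-accuracy one; the step size and parameters are the conventional textbook values, not anything specific to this post) from two starting points that differ by one part in a hundred million, then measures how far apart the trajectories end up.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz system with its classic parameters.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two initial conditions differing by a "minute change" of 1e-8 in x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)

for _ in range(3000):  # 30 simulated time units
    a = lorenz_step(a)
    b = lorenz_step(b)

# Euclidean distance between the two trajectories at the end.
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print("final separation:", separation)
```

Despite starting essentially on top of one another, the two runs diverge by many orders of magnitude more than their initial difference, which is exactly why long-range deterministic forecasting of such a system fails.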
Today, it is widely accepted that weather and climate (and many other) systems are indeed non-linear (often called “chaotic”) systems. My point is that military situations are non-linear systems too, and there is no deterministic algorithm to predict the success of any action. High performance computers running genetic, adaptive algorithms will, just as they do now for weather, climate, investment portfolios and internet searches, be the knowledge base of the future warfighter.
Knowledge in the future will reside in machines. The challenge for us is to design machines we can coexist with: machines that support our goals and improve our lives. At GE’s Intelligent Platforms business, we work daily to provide the tools to build these machines. Research from our five world-class research centers, combined with software from our Software Center of Excellence in California and hardware and software designed and built in our many facilities around the world, is bringing these machines to life.