I noticed recently that books with the phrase “The Last Man Who Knew Everything” in their titles have something in common: their subjects lived during the period close to the Scientific Revolution, roughly between 1550 and 1700. (The examples I own are about Athanasius Kircher, a Jesuit priest born in 1602; Thomas Young, who studied topics such as optics and philology and was born in 1773; and Philadelphia area professor Joseph Leidy, who was born in 1823.)
It’s as if the Scientific Revolution – and the knowledge it spawned – killed the ability to Know Everything. Before then, it was not only possible to be a generalist or polymath (someone with a wide range of expertise) – but the weaving together of different disciplines was actually rather unexceptional. The Ancients discussed topics such as ethics, biology, and metaphysics alongside each other. The Babylonian Talmud discusses everything from astronomy and biology to morality and law, weaving them together into a single compendium.
So what changed? Scientific knowledge exploded in size, mainly due to the application of the scientific method to our surroundings. As that knowledge base and its domain experts grew exponentially, we began classifying and ordering all that we understood – from the classification taxonomy of Carl Linnaeus to manuals for categorizing mental disease. We made sense of our world by dividing information into manageable portions and distinct areas of proficiency.
But as people began to specialize, knowledge became fragmented. We chose to know more and more about less and less. We may have expanded what we as a society know – but it was at the price of no single individual being able to truly know it all.
Now, we obviously require specialized experts (as opposed to dilettantes) to solve specific problems; think about the field of medicine, for example. Yet the most exciting inventions occur at the boundaries of disciplines, among those who can bring different ideas from different fields together. As Robert Twigger noted, “Invention fights specialisation at every turn.”
In fact, some of the most exciting advancements in computing right now come from the field of deep learning – which itself draws from multiple fields: neuroscience, cognitive psychology, machine learning, natural language/linguistics, computer vision, mathematics – to make the next step of AI possible. Companies such as Facebook, Google, IBM, and Microsoft are all involved.
But frankly, this kind of interdisciplinary approach isn’t happening more broadly in corporations, let alone in academia. There are institutional barriers (nearly all training and data live in silos) as well as cognitive and biological ones. Even though the information storage capacity of our brains is vast (multiple petabytes), we eventually bump up against what we can truly understand (what some call The End of Insight) – or we simply can’t hold all the relevant knowledge in our heads.
Still, we needn’t despair. There are ways to foster a culture of interdisciplinarity in a fragmented world.
We Need to Focus on the Tools, Not the Fields
Several years ago, a team of scientists examined hundreds of millions of clicks on scientific papers in order to discern the “clickstream” – the path readers take from one page to the next.
This data revealed patterns of how people moved from one subject area to the next. For example, nursing connects medicine to the fields of psychology and education. Organic chemistry bridges physical chemistry and analytical chemistry; economics is tightly intertwined with sociology and law; and the field of music stands quite distinct.
Of course, these are oversimplifications. Music incorporates concepts from physics and psychology while economics draws heavily from mathematics. But it’s one way to explore the interconnected nature of ideas, and it reminds us that we need to identify the tools necessary to bridge different domains and place them into a connected framework.
Let’s take a simple analogy. What do the following things have in common: doing Sudoku, constructing crossword puzzles, conducting logistics for large companies, playing Super Mario Brothers?
Well, in content terms, not much. They appear to be a collection of tasks that are easy to understand but hard to master. And it turns out that they’re all hard in a specific way: they’re what are known in theoretical computer science as NP-complete problems. Because of this, each of these problems can be converted into a version of any other – I could construct a Sudoku puzzle that, if solved, could potentially shed light on how Walmart should route its delivery trucks.
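Part of what makes these problems NP-complete is an asymmetry worth seeing concretely: finding a solution is hard, but checking a proposed one is fast. Here is a minimal sketch in Python of that “easy to verify” half, using a 4×4 Sudoku (2×2 boxes) purely for brevity – the grid size, function name, and example grid are all my own illustrative choices, not from the original discussion:

```python
# Sketch: verifying a completed Sudoku grid takes polynomial time,
# even though solving generalized Sudoku is NP-complete.
# Uses a 4x4 grid (box=2) for brevity; the same check works for
# any n^2 x n^2 grid.

def is_valid_sudoku(grid, box):
    n = box * box
    groups = []
    groups += [list(row) for row in grid]                             # rows
    groups += [[grid[r][c] for r in range(n)] for c in range(n)]      # columns
    for br in range(0, n, box):                                       # boxes
        for bc in range(0, n, box):
            groups.append([grid[br + r][bc + c]
                           for r in range(box) for c in range(box)])
    # Every row, column, and box must contain each symbol exactly once.
    target = set(range(1, n + 1))
    return all(set(g) == target for g in groups)

solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_sudoku(solved, box=2))  # True
```

This quick-to-check property is exactly what the reductions between NP-complete problems exploit: a certificate for one problem can be mechanically translated into a certificate for another.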