I am sitting on a beach and watching the ocean. On rare occasions, I can see a fish jump out of the water, not too far away. I imagine all the fish that are under water, just out of sight. Some are close together in big swarms, some are alone. And now I imagine being a fisherman on my tiny little boat, crossing that huge expanse of water. There are definitely lots of fish down there, just under the surface. But in the absence of tech gizmos such as radar and sonar, I don’t know where to find them. How can I know where the swarms are? How can I know what to do? If I am not catching any fish, is it because there are no fish in this area, or because there is something wrong with my bait?

Then I get lucky, and I catch some fish – quite a few, actually. Now I not only have fish, I have information: the information that what I just did was successful. But what does this mean? How can I make this information useful for my next decision? I have to find a pattern. If I stay in this area, will I keep catching many fish? What if I come back here tomorrow? Should I switch off the motor of my boat because the noise scares the fish away, or keep it running because the sound attracts them?

When we face complex situations, we are often just a little fisherman on the ocean. Objectively, there is a lot of information in the system: where the fish are, how they behave. But we don’t have access to the relevant information; all we see is that there is a lot of water, everywhere. Objectively, we have many, many options to choose from: go close to that spot three miles out where the cold deepwater current comes up, or go slightly to the left of it, just over the underwater cliff. But since we only see the surface, it does not actually matter which nuance we choose; we don’t know what the implications of these nuances are anyway. And that leaves us, in fact, with very few options indeed: stay where we are, go somewhere else, keep moving, somehow.

To the bureaucratic mind, this feels very uncomfortable.

Only in retrospect do we tend to see everything as a cause of the outcome. When we caught a lot of fish, we conclude that we evidently made the right decisions. But in truth we have no clue which of these decisions were important, which were of no consequence whatsoever, and which were actually keeping us from an even better result. We still don’t understand the intricate interdependencies at work that day in the ocean, and even if we did understand what happened on that one day, we have no indication whether that course of events will repeat the next day. We are victims of the teleological fallacy, the tendency to retrospectively see everything as a relevant means to the outcome.

The conclusion, for the moment, is humility and hope. If I am not catching any fish here, I move somewhere else, acknowledging my ignorance, accepting the possibility that maybe a big swarm was just coming my way and I am now steering away from it – and I will never know. This acceptance of my ignorance is the basis of what little informed action I can take. I do get rid of the bait that has never worked so far. I do come back to a spot that has been relatively successful so far, although I have no idea whether that has to do with underwater currents and temperature. When I accept that everything I do is a more or less educated guess, I am much more open to adapting my plan. The options at my disposal become few, not many, and I consider them as constantly changing. As the economist John Kay noted in his wonderful book Obliquity, in complex situations even the most powerful men, such as Lincoln or Roosevelt, must proceed by choosing opportunistically from a narrow range of options.