On Dumbing Down

I’ve been considering the simplification of language. There is an interesting angle about self-censorship and culture-enforced newspeak, but that is not where I’m going with this. I am thinking more about each person’s individual inner dialogue. If how we speak influences how we think, then dumbing down language dumbs down our thought. The tension is that language is a system designed to exchange information with other people, and the more complicated you make your language, the less likely they are to understand you.

There is certainly value in communicating in a manner that is easy to understand; it serves as a reality-check. If your words are completely incomprehensible or vacuous even to the smartest people you share them with, and they crumble when slapped with objections, then you have to reckon with the possibility that you just sound silly, in which case you’re probably up-your-own-ass and deluded. But, and this is a big but, I think it’s also ok for your internal dialogue not to be understandable by most people. Because, as much as there is value in being understood, there is equal or greater value in having an integratively complex thought process: thoughts that depend on many different interconnected models, priors, and levels of abstraction, and are therefore sprawling and irreducible.

Just as words help you find out what other people think, they also help you find out what you yourself think; they are for communicating with yourself as much as for communicating with others. And you should not rely, for your own thinking, on the dumbed-down language that you use to simplify your ideas for randos. I will now give an example.

First, some context. I didn’t live up to my promise of creating a series of blog posts about the “great stagnation,” a thesis supported by smart people like Peter Thiel, Patrick Collison, and Tyler Cowen, but I can give a very brief explanation. The thesis is that scientific and technological progress has been slowing, and therefore, economic growth is stagnating and possibly artificially inflated. Assuming, for a minute, that the great stagnation hypothesis is true, there are several semi-competing theories that attempt to explain the reason for it. The most popular explanation is cultural/regulatory, and I am partial to that. But there is another equally popular argument: there are proportionally fewer scientific advances today than there were in the past because “the low-hanging fruit have been picked.”

Now, to say “the low-hanging fruit have been picked” is kind of a lazy, layman, analogy-based explanation. It’s not too hard to understand. It is also a perfectly good summary because, well, it actually does accurately summarize the argument. The problem is, I noticed a strange change in my mind before and after I heard the argument worded this particular way. Before, I would spell out a complicated analysis of how specialization affects the ability to make breakthroughs, and the accessibility of certain types of information. But after hearing it expressed as “low-hanging fruit,” I just started calling it that. It is concise, which is good. It allows you to “get to the point” quicker. However, by getting to the point quicker, you are not doing the mental work of spelling out the details. It is a useful exercise to force yourself to lay things out laboriously.

Don’t take shortcuts. I recently heard the economist Tyler Cowen explain the great stagnation as a type of last-mile problem. I thought this was quite astute, and I’m not sure I had thought of it that way before. What’s interesting is that his argument is, ostensibly, the same argument as “the low-hanging fruit have been picked.” If you whittle it down to its most reductionist form, he’s making a point about diminishing-returns curves. I had conceptualized it in terms of diminishing returns, but I was so busy mental-shortcutting with “picking fruit” that I never considered linking it to the last-mile problem.

A little bit of inefficiency is useful. When your thinking stays complicated, the concepts are all in the same pot, bumping around, and that is how people make connections. I wonder to what extent analogies are useful. Just now, should I have used that analogy about the pot, or should I have laid things out in more literalist detail? Surely analogies have some use (Scott Adams disagrees, by the way). I suppose analogies are useful for the same reason, and in the same way, that hypotheticals are useful. They are a tool of thought, of surveying the world, but in the end you should take things for what they are in and of themselves, not as some go-between linking yourself and the more familiar object.