Category Archives: Knowledge and Epistemology

Experts and Expertise

TL;DR: Expertise is a multivariable spectrum, not a binary, and disagreements are often signs of different knowledge. Seek the knowledge gap between different experts, and between yourself and them. Find what you didn’t realize you didn’t know, and diversify your expert portfolio.

Seeing all the debates around AGI recently has made me feel that many people seem deeply confused about what “expertise” is and how to relate to it.

Rejecting expertise is something I never do, even if I disagree with the expert. Nor, obviously, do I bow to expertise. Instead, I use experts’ beliefs as opportunities to reflect on my own state of knowledge.

Useful explanations are the main thing I really care about, and both laymen and experts can provide those… but knowledge is the fundamental building block of a good explanation, and “expert” is meaningless as a word if it doesn’t signal at least some reservoir of knowledge.

When two experts disagree, my immediate thought is “I wonder what knowledge each of them has that the other lacks.”

One of them may even have all the relevant knowledge the other does, and more! In which case one of them can simply be wrong about a specific question, or be more correct more often in general.

But always, when experts disagree, figuring that out, figuring out which expert has what knowledge, is where I find the most value in pointing my attention. Not all disagreements come down to explicit knowledge, of course; sometimes people have biases or heuristics or values that affect their beliefs… but the first two are just compressed knowledge, and the last one is usually pretty easy to pick out if the person explains their reasoning.

This is why, to me, asking people to notice their non-expertise (lack of knowledge) on a topic can be useful, so long as it doesn’t imply submission to authority. It should act as a prompt to notice confusion and boggle over uncertainties. Responding with “experts can be wrong” is both trivially true and uselessly general as a critique.

For me, learning from experts means seeking the gaps in knowledge that make them the expert and me not one. I still expect what they say to make sense to me, but I can only do that if I can find the parts of my model that can’t account for what they say, and that takes work on my part.

It’s sometimes hard work, and I suspect that’s what makes most people reject expertise when it’s convenient to their disagreement to do so. But we have to be willing to examine our own models, boggle over what’s missing, and not feel threatened by the gaps. Learning can be fun!

So, how to identify “actual experts” so you don’t waste time and energy listening to everyone who claims expertise?

Good question! I wish I had a better answer. It’s often hard, and tempting to outsource to credentials. For many decisions, like car repair or health, it makes sense to defer to doctors and mechanics, though I still always check online just to learn what the thing they say means and whether it fits my experience or symptoms.

But the central question I reorient to is, “What does this person think they know, and why do they think they know it?”

The people I most respect are those who ask others, particularly those who disagree with them, to make their beliefs legible, and who ask what would change their mind. Seeing one expert do this to another is a sign that they’re someone who reflects on their own knowledge often, and that I should pay more attention to what they say.

This is also how non-credentialed experts can very clearly overturn what credentialed experts say, for me. When someone spends dozens, or even hundreds, of hours making their thinking legible in a way that I can observe, particularly about a specific topic… sure, they can still be wrong, just like the credentialed experts.

But at least I can check whether a credentialed expert addresses their cruxes or not. And I can tease out what part of their belief is based on knowledge they can make legible, vs heuristics or values they aren’t aware of or that I might disagree with.

Transgender Visibility Day, and the Laziness of Language

Happy Transgender Visibility Day!

I’m one of those people for whom “they” and “them” feel about as fitting as “he” and “him,” but I’ve been pretty lucky in a lot of ways and it doesn’t really bother me other than in a few specific circumstances. Normally I don’t even bring it up, but I’ve been considering doing it more often, even though I feel generally masculine, for the sake of normalizing something that really shouldn’t be that big a deal, so that’s part of what I wanted to do with this post.

But the much bigger part of why this feels important isn’t about me, but about the absolute weirdness that comes from society confusing its heuristics and semantic shorthands with deciding it’s allowed to tell people what they “should be.”

Because that’s what this debate always comes down to. The labels society developed are all terrible ways to actually map reality, and while many people, and some parts of Western Society, have begun evolving past a lot of the baggage those labels inherited… there’s still a long way to go, and gender is just the latest frontier of this.

In the old days being a “man” or “woman” meant you had to have A, B and C traits, or like X, Y and Z things, and if you were different, that meant you were less of one, which was always framed in a bad way. More and more people are coming to accept that this is nonsense, but we get stuck on things like biology.

It’s not entirely our fault. The problem is we were given shitty words, a lazy language, and told that reality follows the words rather than that the words are a slapdash prototype effort to understand reality.

We had to develop words like “stepmom” to differentiate “biological mom” and “non-biological mom,” except THAT doesn’t work all the time either, because stepmom implies that they married your dad, so what do you call the woman who helped raise you but didn’t marry your dad? We all just shrug and accept this gap in our map because no one bothered to create differentiating words for “person who carried you in their womb whose genetics you share” and “person who is female who raised you.” Too much of an edge-case, maybe, or the only people it affected were poor, or it wasn’t something polite company would acknowledge because the “proper” thing to do would be to cement the relationship through marriage.
Bottom line is it’s a bad language. It’s lazy. It carries baggage and artifacts. It imprecisely describes reality. And we should always keep that in mind, ALWAYS, when we disagree with people about basically anything, but PARTICULARLY when we disagree about each other.

Ethnicity is like this too. There are some useful medical facts that can be determined through heredity and genetic trends in populations, but for 99% of circumstances, the question of what “race” someone is ends up being entirely about social constructs. It’s about how they’re treated by others, it’s about their experiences and lack of experiences, and people fall through the cracks of our shitty, lazy language all the time.

23&Me says I’m 96.4% “Iranian, Caucasian & Mesopotamian.”

Does that make me “white” or “middle-eastern” on the US Census? When people ask if I’m Middle-Eastern, what question am I actually answering? (And no, just saying “I’m Persian” or “My parents are from Iran” does not tend to clarify things for them, because this is not something most who ask know themselves!) I’ve always passed as white (other than in airports, at least), so most of the time it seems weird to call myself Middle-Eastern, though my dad and brother are far more obviously from the Middle East, and my dad in particular has lived a very different life as a result of that. I get clocked as Jewish once in a while, but only once in a way that made my life feel endangered.

The point is there’s nothing at the heart of the commonly asked question of what “ethnicity” I am. Knowing my parents are Iranian would tell you some things about the kinds of food I enjoy and am used to, but not exclusively. I was raised Jewish, and that would again indicate some things about food familiarity and what holidays I’m familiar with. But when it comes to who I am, as a person, the pattern of thoughts and behaviors that make up me, it’s a nonsense question that, in a perfect world, I wouldn’t even have to consider. As with gender, I’m lucky enough that on most days I don’t have to, unless I’m filling out a form of some kind.

Back to gender. Because we were raised in a culture too lazy and biased to come up with a word for “XY chromosomes” that means something different from “male presenting,” and another word for “identifies with this bundle of culture-specific gender stereotypes,” and so on, we waste hours and hours, millions of collective hours, we waste blood and sweat and tears, on stupid debates about whether people should be called “men” or “women,” and the question of whether those should be the only two options takes the backseat, while the question of how much it actually matters compared to how we treat each other is talked around or ignored.
There are SOME non-stupid questions in that space. There are some non-stupid considerations that have to be navigated once in a while in society where something similar to the concept of “gender” or “sex” is important, particularly in medical contexts, dating contexts, physical competitions, etc.

But these are 1 in 100, 1 in 1,000, probably really 1 in 1,000,000 of what people actually care about when you examine society’s insistence on how lazy we can collectively get away with being when thinking and talking about each other, and they certainly don’t have any relationship to the various hysterias that lawmakers tend to leverage when deciding which bouts of cultural fear or ignorance are most politically expedient to them.

In my ideal world we all have pills we can take to transform into any body shape we want anyway, or a menu in a simulation that lets us be anything we want, and anything that takes us even a tiny step in that direction is better than things that keep us stuck. Which means I’m always happy to call other people whatever personal-identity labels they’d prefer to be called, even if I slip up sometimes due to pattern-matching visual gender tropes, or accessing cached memories of a person.

As for myself, over the course of my life I’ve responded to “Damon,” “נתן,” “Max,” and “Daystar,” and I honestly don’t really have a preference with what you call me; just how you treat me.

You’re Probably Underestimating How Hard Good Communication Is

People talk about “Public Speaking” or “Oration” as skills, and they are. We call people “gifted communicators” if they’re generally skilled at conveying complex information or ideas in ways that even those without topical expertise will understand. 

We get, on some level, that communication can be hard. But the above is mainly about one-directional communication. It’s what you’re engaging in when you write a blog or social media post, when you’re speaking at a conference or in a classroom or for a YouTube video. It’s not what people engage in day to day with their friends and family and coworkers, which is more two-directional communication.

And yet we don’t have a word for “two-directional communication skill,” the way we do “Oration,” or words for people who are really good at it. We might say someone is a “good listener” if they can do the other half of it, and there are some professions that good two-directional communication is implicitly bundled with, such as mediators or therapists, but neither is specifically skilled in doing the everyday thing.

So first let’s break this “two-directional communication” thing down. What does it actually take to be good at communicating like this? What subskills does it involve? 

1) Listening to the words people actually say, also known as digital communication.

2) Holding that separate from the implications that went unsaid, but may be informed by body language, tone, expression, etc, also known as analogue communication.

3) Evaluating which of those implications are intended given the context, rather than the result of your heuristics, cached expectations, typical-mind, and general knowledge you take for granted.

4) Checking your evaluation of implications before taking them for granted as true and responding to them.

This is what it means to be a good listener. Not in the “you let me talk for a long time and were supportive” sense, but strictly as a matter of whether you managed to accurately take in the information communicated without missing signal or adding noise.

The second half of being a good communicator involves:

5) Communicating your ideas clearly, with as little as possible lost between the concepts you have in mind and the words you use to express them.

6) Being aware of what your words will imply, both to the individuals you’re speaking to and to the average person of the same demographics.

7) Being aware of what your body language, tone, expression, and the context you’re saying it in will imply. 

8) Adding extra caveats and clarifications to account for the above as best you can.

Each of these can be broken down further, but as the baseline these are all extremely important. And yet very few people are great at all of them, let alone consistently able to do each well at all times.

I think this is important as a signpost for what people should strive to do, as a humility check against people who take for granted that they’re communicating well while failing at one or more of the above, and last but not least, as something that should be acknowledged more often in good faith conversations, particularly if things start to go awry.

In addition, there is a population for whom explicit communication feels intrinsically bad, particularly if it’s around their traumas or blind spots, or where their preferences naturally fall toward a more “vibe-like” experience. They can be seen as a mirror-of-sorts for the population for whom analogue communication is intrinsically harder to pick up on… and when these two types of people meet, communication is often much harder than either expects, and much more likely to lead to painful outcomes.

Good communication is harder than we collectively think, and effective two-directional communication is one of those skills we often take for granted that we’re at least “decent” at because we engage in it all the time, and usually get by just fine.

But this leaves us less prepared for when we’re in a situation where we or others fail at one of the above skills, in which case it’s good to have not just a bit more awareness of why we fail, but humility that it’s always a two-way street.

Trust vs Trust

The word “Trust” was never quite operationalized as well as it should have been in society, and as a result it can now be used to mean two rather different things.

The first form trust takes is probably the most commonly understood use of the word; expecting someone to behave in a way that’s cooperative or fair. If you trust someone enough, you may enter into a business partnership with them or let them borrow your belongings or vouch for them to friends or colleagues. This trust can be broken, of course, if they start to act in ways other than what you expect them to, particularly if they start to defect from agreements. It is, ultimately, about how well you can model their ability to act prosocially.

The second form trust takes is much rarer, and yet somehow feels to me more like the “true” meaning of the word. It’s a level of trust that’s related to your confidence in someone’s character, sometimes despite their actions. It’s not about predicting what they’ll do in any given situation, but rather predicting the arc that their actions will take over a long enough timeline; trusting them, essentially, to error-correct.

This may seem like it has the same outcomes, like if you trust them enough in this way you’d still be okay with lending them something, but it’s far less reliant on game theory or incentives, and far more about what you believe about what kind of person they are. In the first case, if the person you trust does not give back what you lent them, your trust is broken. In the second case, if they do not give back what you lent them, your trust endures, because your expectation is that their character is one who had a good reason not to give it back. This doesn’t require a resolution; it’s baked into the decision to lend them the thing itself, as you’d expect yourself not to regret lending it to them if you had all available future information, and are thus okay with not having that information.

That’s why, in this second sense, “Trust” really only has meaning if it’s applicable to situations where you might normally trust someone less or be unsure of them. If you can always know what someone does and why, your trust of them lacks the real power of the second definition. It’s only when someone is able to act without your knowledge, or acts in ways that you don’t understand, or even that seem like they harm you, that your “true” trust in them is tested, and either justified or not.

Because it can be unjustified. People can trust others in this “true” sense and still be wrong, and be hurt as a result. I think this is why it’s such a rare form of trust, in the end; it’s a more vulnerable stance to take, the same way an expression of love is different from an explicit commitment.

Which ultimately makes this trust about you as much as others. Whether you want to be the kind of person who trusts others to that degree or not is an orientation to vulnerability, and the deeper connections that can result from it. It makes sense not to grant it too often, but to never grant it at all would indicate either an inhibition of true connection, or a paucity of good friends.

Memorization Matters

When I was young, I, like many people I knew, used to deride “memorization tests.” In a world where learning facts is easier and faster than it’s ever been, it was hard to imagine why being able to recite trivia for a test would ever be useful. And since structured education is an abysmal way to learn in general, it took me a while to distinguish the poor pedagogy from the value of actually having memorized knowledge, even in the Information Age:

1) Synthesizing existing knowledge is usually necessary to gain new insights about the world. It seems obvious when stated clearly, but pay attention to how often people feel like they have new or interesting ideas, only to discover that they’ve already been had by others or are invalidated by some facts they didn’t know. Knowledge builds on knowledge; the more you have, the more likely you are to generate more.

2) Memorized information saves time, the value of which is often underestimated. People spend a lot of time trying to remember things and arguing about what facts are true (often for inane pop-culture info), and even a 10-second Google search adds up if you do it enough, and can break your flow of thought and productivity. Personally, I spend hours every week researching stuff for my story that someone with a more in-depth physics, history, biochemistry, etc. education would just know and be able to use in their writing.

3) Having a large body of true knowledge is VITAL for good information hygiene. Lack of knowledge is a big part of what makes up “gullibility.” When you hear an assertion about reality, your mind often automatically feels something, whether it’s skepticism, plausibility, confidence, or just uncertainty, that weird “back and forth” feeling as your brain offers up arguments or data or comparisons for and against.

The more true facts you actually know, the better calibrated your skepticism of false claims will be, and the more likely you are to actually investigate things that are presented as true when you think they’re not, or presented as false when you think they’re true.

To be clear, when I talk about memorized facts, I mostly am referring to actual understanding, not just being able to say the right combination of noises by rote. Memorizing a list of invention names doesn’t help you create new inventions, being able to recite the names of atoms doesn’t help you understand each one’s properties, and new information would just get absorbed uncritically if you don’t understand what you’ve memorized well enough for there to be some interaction with it. But once in a while even basic memorized trivia like names and dates is valuable for its own sake too.

I don’t mean to counterswing into the opposite extreme. Simple facts are no substitute for critical thinking or creativity, and knowing how to gather good information is also a very important skill. But the knowledge you have stored is what informs your thoughts day to day, and often affects whether you will know to start gathering more when faced with new info of dubious quality.

Ontology 101

Learning new words late in life (by which I here mean “in my 30s”) is interesting, because most of the time it’s a word that’s just another version of a word I already know with some subtle difference, or a mashing of two concepts that might be useful to have mashed together once in a while. Truly new concepts become rarer the older and more educated someone is, but as faulty as words are for communicating concepts, if you have no word for a concept then it becomes much harder to think about and discuss, a bit like having to rebuild a chair every time you want to sit on it, or only being able to direct people to a location by describing landmarks.

A couple of years ago I had no idea what “ontology” actually meant, despite feeling like I was hearing people say it all the time. Once I did, I started using it all the time too. Okay, not actually, maybe a few times a month, but that still feels like a meaningful jump given I had no word to cleanly represent what it meant before! So here’s me explaining it in a way I hope will help others use it too.

The problem was, every time I saw the word used, it seemed like it could be removed from a sentence and the sentence’s meaning wouldn’t change. All the definitions I read appeared to just mash words together in a way that made sense, but didn’t mean anything. For example, Wikipedia says:

“The branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level.”

This may or may not be a great definition, but it does little to actually tell people how to use the word “ontology” in any other context, or how it can be usefully applied to confusions or conversations.

What I found most helpful, ultimately, was considering the question “Do winged horses exist?”

This is a question of ontology, because depending on how we define “exist” the answer might be “Probably not, there’s no evidence of any horses ever having wings,” or it might be “Yes, I read about them all the time in fiction, in contrast to flanglezoppers, which is a sound I just made that has no meaning.”

So ontology is the study and specification of what we mean when we say “real.” But it’s also about categorization; a more useful definition I came across treats “ontological” as an adjective signifying a relation to subjective models.

What does “a relation to subjective models” mean? Well, all ways of thinking of objects, for example, are subjective models; reality at its most basic level is absurdly fine-grained, far too detailed for us to understand or easily talk about. So we focus on emergent phenomena that are much easier to interface with, even if they’re not as precise. For example, we can talk about a country’s hundreds of millions of individuals, with their own personal goals and desires and preferences, and that can be useful. Or we can just say “The USA wants X” and it’s understood to mean something like “a meaningful chunk of the population” or “the government.” On the flip side, even an individual is not monolithic in their desires, and can be further broken down into subagents that might want competing things, like Freedom vs Security.

So it can be very valuable to know what model/map/layer you’re organizing concepts on, as well as what level your conversation partner is on, to focus discussions. I wrote a brief conversation that shows what this looks like:

The philosophy teacher hands his student a pencil. “Describe this to me as if I was blind.”

The student thinks he’s clever, so says, “Well, it’s a collection of atoms, probably mostly carbon and graphite, with some rubber molecules—”

The teacher flicks the student’s ear, causing him to wince. “You’re in the wrong ontology. What you described could be a lot of different things, it could have been a lubricated piece of coal for all I knew. Describe it in a way that makes its distinctly observable parts plain to me.”

“Um. It’s a core of graphite wrapped in wood, with a piece of rubber on the end?”

“Better. Now switch the ontological frame to the functional parts.”

“It… has a writing part that’s at one end, and it has an erasing part at the other, and it has a holding part between them?”

“Excellent. Now tell me about it from the ontology of fundamental particles…”

There may be no end to the ontological frames you can use to examine and organize reality; animals can be classified by environmental preference or limb count or diet, stories by genre or structure or perspective, food by flavor or culture or substance. Some frames are more broadly useful than others, but being able to swap ontological frames, reorganizing how concepts are related and at what complexity level of “reality” they emerge, can be very valuable for the whole practice of using maps, frames, lenses, etc. in a strategic way.
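To make the frame-swapping idea a bit more concrete, here’s a minimal sketch in Python (my own toy example, not something from the post): the same handful of animals, with invented attributes, regrouped under three different classification keys. The animal names, their attributes, and the organize helper are all assumptions made up purely for illustration.

```python
from itertools import groupby

# Toy data (invented for illustration): the same animals carry several
# attributes, and any attribute can serve as an "ontological frame."
animals = [
    {"name": "horse",   "diet": "herbivore", "covering": "fur",      "habitat": "land"},
    {"name": "eagle",   "diet": "carnivore", "covering": "feathers", "habitat": "air"},
    {"name": "dolphin", "diet": "carnivore", "covering": "skin",     "habitat": "sea"},
    {"name": "snake",   "diet": "carnivore", "covering": "scales",   "habitat": "land"},
]

def organize(items, frame):
    """Group the same items under whichever classification key (frame) you pick."""
    key = lambda item: item[frame]
    return {k: [i["name"] for i in group]
            for k, group in groupby(sorted(items, key=key), key)}

# Same underlying reality, three different maps of it.
for frame in ("diet", "covering", "habitat"):
    print(frame, "->", organize(animals, frame))
```

None of the groupings is the “true” one; the value comes from being able to pick, and switch, whichever frame matches the level your conversation or problem actually lives on.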