Category Archives: Knowledge and Epistemology
Confidence and Humility
In Damon Culture (that is to say, a culture made up of people where my traits are the expected ones of the average person, if not quite a culture made of my literal clones), how confidently someone states their beliefs is ideally NEVER influenced by how confident people around them are. Only by how confident they themselves feel about the issue.
The second may seem a natural outgrowth of the first, given that how people feel about issues is often affected by others’ confidence. But the distinction is actually very, very important.
I’ve taken other people’s hedging as a REMINDER to check in with my own sense of confidence. I’ve also noticed new uncertainties when people I trust confidently say things I don’t believe.
But I never speak less confidently about something just because someone around me is doing so… particularly if they’re saying something I believe is false! If anything, someone else hedging around a statement I find false is a time I tend to feel MORE encouraged to say things overconfidently, and I have to remind myself to check in with how-I-would-phrase-the-thing-I-believe-independent-of-what-they-said.
Because… that’s what confidence is FOR, in Damon Culture. It’s a signal for your own state of belief. Anything else seems like deception, one way or another, or playing social games out of fear.
(Also jokes, but that’s a particular context in which it’s often very clear, and clarified shortly afterward)
And fear may well be why it’s a thing people feel inclined to do! It seems reasonable in a society/culture that conditions people (particularly people of certain genders) to sound less “arrogant” or “bossy,” and where people with power will punish those who’ve pricked their pride. It’s also reasonable to think “I need to be careful in how forcefully I say this so as not to make this person defensive” in certain contexts.
Generally though, if someone, particularly in the rationality community, docks someone points for being confident, *independent of being incorrect,* they are very clearly Doing It Wrong, in my eyes, just as much as people who dismiss anything that’s said with epistemic humility.
From the perspective of “What does Damon believe an ideal community would do,” adjusting to someone else’s apparent humility is a sign that something went wrong, either in the person’s understanding of epistemic humility or in their trust in the people around them to understand how to interpret their confidence (acknowledging that this lack of trust may be justified, in non-Damon Culture).
Clickbait Soapboxing
Someone on Twitter said:
I am guilty of deliberately stating things in a bold & provocative form on here in order to stimulate discussion. Leaving hedges & caveats for the comments section. On net, I think this is better than alternatives, but I’m open to being convinced otherwise.
Should you care? *shrug* What makes us care about anything we say in the first place? Just don’t motte-and-bailey “communicating for self-expression” or “processing out loud” vs “sharing ideas and learning” or “talking about True Things.”
Val writes well about a sense of “stillness” that is important to being able to think and see and feel clearly. I think the default for news media, social media, and various egregores in general is to hijack our attention and thought patterns and channel them into well-worn grooves.
And I have a hard time trusting that people who (absent forewarning/consent) try to trigger others in order to have a “better” conversation… are actually prioritizing having a better conversation. It seems like the same generators are at work as when an organization or ideology does it.
A Psychological Take on AGI Alignment
My understanding of AGI is, perhaps predictably, rooted in my understanding of human psychology.
There are many technical questions I can’t answer about why Artificial General Intelligence can easily be an existential risk for humanity. If someone points to our current Large Language Models and asks how they’re supposed to become a risk to humanity… hey, maybe they won’t. I’m a psych guy, not a techie. Sure, I have ideas, but it’s borrowed knowledge, well outside my wheelhouse.
But it only minimally matters to me whether AGI is an existential risk for this decade vs this century. Whether LLMs are the path to it or not, the creation of AGI is not forbidden by physics, so I’m confident it will come about sooner or later.
When it does, it could be the start of a utopian future of abundance the world has never seen before… but only if certain, very specific types of AGI are created. Many more types of AGI seem predictably likely to lead to ruin, and as far as I’m concerned, until this “alignment problem” is solved, it’s a problem humanity needs to take a lot more seriously than it has been.
And I get why that’s hard for a lot of people to do, given the complexity and speculative nature of the threat. But as I said, my understanding of it is rooted in psychology, and I think that’s important given how humans are the only general intelligence we know exists and can at least somewhat understand.
Is there some law that says an artificial intelligence has to work like a human brain does? Definitely not, and that’s more concerning, not less.
There’s a whole taxonomy in science fiction for different kinds of alien races, and what sorts of relationships we can expect them to have with humans. Most sci-fi just defaults to the weird-forehead aliens of Star Trek, or the slightly more monstrous but still basically human aliens of Star Wars.
But “hard” sci-fi is where you’ll see authors really exploring what it might mean to find a totally different evolutionary lineage result in intelligent life, and long story short, no matter how the alien looks, cooperation depends on understanding and mutual values.
And humans can barely cooperate with each other despite sharing most of our genetics and basic building blocks of culture, like enjoying music and sugary food and smiling babies. If you try getting along with the equivalent of a sapient shark the exact way you would a human, you’re going to have a bad time.
(I have no problem inherently with the existence of non-human-like intelligences, but even if you don’t read science fiction, any study of earth’s ecological history should make it clear why minds which care about completely different things pose existential risks to one another. I hope any sufficiently different, fully sapient minds exist outside our lightcone, where we can’t harm each other.)
But many people fail to track how possible “inhuman” AGI is, and I think it’s because there are four things most people, no matter how good at computer science, physics, philosophy, etc, largely do not understand about human psychology.
1) What motivates our actions.
2) What causes memes to be more/less effective.
3) How human biology affects both of those.
4) The role prediction plays in beliefs and actions.
So I’m going to very quickly go over each, and maybe someday I’ll write the full essays they deserve.
1) Human actions are informed by our ideas, but motivated by emotions and instincts we evolved for fitness in the ancestral environment. Our motivations are “coded in,” and felt through, our bodies.
This means outside of reflexes and habits, everything we deliberately choose to do follows some emotional experience or predicted emotional state-of-being.
Again, this isn’t to say ideas don’t matter. But they don’t matter unless they also evoke some feeling. When humans feel things less, either through some neurological issue or hormone imbalance or brain injury, their motivation to do things is directly affected.
No emotions = no deliberate actions, only instincts and reflexes.
2) Memes persist and spread through emotional drives, which bottom out in biological drives. Memes scaffold on genes.
Memes can scaffold off memes, but when memes override genes, they use emotions to motivate actions by rewiring what we find rewarding or aversive. Which means the effectiveness of memes is to some degree still based on our biology.
If the ideas we learn don’t motivate us toward more adaptive actions as dictated by our biology and the broader memes of our culture, they will lose to ideas that do. But a creature with different biology or in a different context could find totally different ideas adaptive or non-adaptive!
3) Biology is the bedrock our values all build on. All the initial things we care about by default, like warmth, food, smiles, music, even green plants, are biologically driven.
Ideas introduce new things that we care about to the point where we each become unique individuals, blends of our genetics and the ideas we’re exposed to, but again, it’s all built on our biological drives.
So, tweak our hormones, neurotransmitters, maybe even gut biome? We will change. What we like, what we believe, what we’re motivated to do, all can change with minor tweaks to the chemical soup that is our body. Sufficiently tweaked biology even alters our ability to discern reality, let alone rational vs irrational beliefs or courses of action.
Or for a blunt-force example, take any human with a strong interest, passion, or ideal, then introduce that human’s body to sufficient heroin, and you can observe in real time, as if by turning a dial, the way their motivations change away from previous interests, passions, and ideals and toward whatever it takes to acquire more heroin.
The degree to which this is recoverable or resistible is an interesting question; obviously not everyone finds everything equally addictive. But the reality is undeniably that our feelings and motivations are driven by our (biological, emotional) experiences. And baseline-human-addicted-to-heroin is far from the strangest biological base a general intelligence can be attached to.
4) Minds by default navigate reality by prediction, short and long term, and react accordingly.
Predict suffering? Aversion. Prolonged suffering? Depression. Fun? Motivation. Danger? Fight/flight/freeze/fawn. All are affected by memes and knowledge. But all are rooted in human biology.
New ideas can change the models we use to understand reality, and what predictions we will make as a result. But we still need to care about those outcomes, and the caring bottoms out in what our bodies want or like or think will be adaptive, however crudely.
Again, ideas can also influence those things. There are memes that lead people to not have children, despite genetic drives. There are memes that lead people to set themselves on fire.
But always these memes are motivating behavior by rewiring this system of predictive processing, of imagining different futures and then having an emotional reaction to those futures that motivates choosing A over B, C, or D.
So, to summarize, in case the connection to AI isn’t clear:
AI doesn’t have biology. Analogous inputs to weigh decisions have to be created for it. Without them, the AI would have no emotion/desires/values. Not even instincts.
Intelligence alone is not enough, for us or for AI. Intelligence is the ability to problem solve, to store knowledge and narrow down to the relevant bits, to pattern match and make predictions and imagine new solutions.
But that capability is not relevant to what you will value or care about. If you attach that capability to a heroin-maximizer, you will get lots of heroin. You need something more to nudge it toward one preferred world state over another, even if you don’t care what that world state is, because the AGI still needs to care.
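For readers who think in code, here’s a minimal toy sketch of that separation; everything in it, from the function names to the heroin-maximizer stand-in, is an illustrative invention of mine, not a claim about how any real AI is built:

```python
# Toy sketch: "intelligence" as a value-neutral prediction function,
# and "values" as a separate scoring function over predicted futures.

def choose_action(actions, predict, value):
    """Pick whichever action leads to the highest-valued predicted future."""
    return max(actions, key=lambda action: value(predict(action)))

# A hypothetical heroin-maximizer: the prediction machinery doesn't
# care about anything; only the value function makes the agent "care."
predict = lambda action: {"heroin": 10 if action == "seek heroin" else 0}
value = lambda future: future["heroin"]

print(choose_action(["seek heroin", "write poetry"], predict, value))
# -> seek heroin

# Remove `value` and there is no ordering over futures at all: however
# good `predict` gets, nothing makes one world state preferable to
# another, so there is nothing for the agent to deliberately choose.
```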
And so, as far as I understand human psychology, there is no “don’t align” AGI option. For it to be an actual AGI that does things, for it to be an agent itself, it needs some equivalent of human instincts/emotions for it to have any values at all.
And we ideally want it to have values that are at least compatible with sharing the same lightcone as us, let alone the same planet or solar system.
Some people bring up human children as a rhetorical comparison to AGI, implying that we should treat the two exactly the same. Their worry is that, instead of letting an AGI explore the realm of ideas as it wants, people will try to indoctrinate it, and that so long as that’s avoided, all will be well. And indoctrination is certainly a danger when it comes to superintelligent beings of any kind.
[A whole separate post would be needed to explore why an artificial general intelligence should be treated as essentially equivalent to a superintelligence, or to something that will soon become one, but again, even if I’m wrong about that, it’s not a crux for me, because superintelligence is not forbidden by physics, and even if my kids and I can live full, happy lives, I still care about my children’s children and my friends’ children’s children.]
[[There is also a school of thought that says intelligence is binary, you either have it or you don’t, and so superintelligence is basically not a real thing. Again, I would need a whole essay to explore why this is wrong, but I can confidently say that studying a rudimentary amount of psychology shows how untrue the “intelligence is binary” theory is for humans, let alone minds that might be built entirely differently from ours.]]
But indoctrination is among the last of the dangers when dealing with AGI. If all we have to worry about is an AGI being indoctrinated or coerced, we will have already solved like 99% of the dangers that come from AGI.
Because at least a superintelligent human capable of inventing superplagues or cold fusion would still share the same genetic drives as the rest of us. It would (most likely) still find smiles friendly and happiness inducing. It would still (most likely) appreciate music and greenery.
An AGI will not care about any of that, will not care about anything, if it is not programmed, at some basic level, to “feel” at all. There needs to be something in the place of its motivation generator, for the ideas it’s introduced to afterward to scaffold on when influencing what it chooses to do.
And sure, then it might learn and grow to care about things it didn’t originally get programmed to, the way humans do… assuming whatever it runs on is as malleable as the human brain.
But either way, “AGI Alignment” isn’t about control. You can’t think that something is “superintelligent” and also believe you can control it, or else we have different definitions of what “superintelligence” even means. If your plan is to try and control something that thinks both creatively and so quickly that you might as well be a tree by comparison, you will also have a bad time.
Alignment is about being able to understand and share any sorts of common values. And because it’s not optional for a true AGI to be a person, the only questions are how to do it “best,” for itself and humanity, and who decides that.
Experts and Expertise
TL;DR: Expertise is a multivariable spectrum, not a binary, and disagreements are often signs of different knowledge. Seek the knowledge gap between different experts, and between yourself and them. Find what you didn’t realize you didn’t know, and diversify your expert portfolio.
Seeing all the debates around AGI recently has made me feel that many people seem deeply confused about what “expertise” is and how to relate to it.
Rejecting expertise is something I never do, even if I disagree with the expert. Nor, obviously, do I bow to expertise. Instead, I use experts’ beliefs as opportunities to reflect on my own state of knowledge.
Useful explanations are the main thing I really care about, and both laymen and experts can provide those… but knowledge is the fundamental building block of a good explanation, and “expert” is meaningless as a word if it doesn’t signal at least some reservoir of knowledge.
When two experts disagree, my immediate thought is “I wonder what knowledge each of them has that the other lacks.”
One of them may even have all the relevant knowledge the other does, and more! In which case one of them could simply be wrong, in a binary way, about the specific question at hand, or could be more correct more often in general.
But always, when experts disagree, figuring that out, figuring out which expert has what knowledge, is where I find the most value in pointing my attention. Not all disagreements come down to explicit knowledge, of course, sometimes people have biases or heuristics or values that affect their beliefs… but the first two are just compressed knowledge, and the last one is usually pretty easy to pick out if the person explains their reasoning.
This is why, to me, asking people to notice their non-expertise (lack of knowledge) on a topic can be useful, so long as it doesn’t imply submission to authority. It should act as a prompt to notice confusion and boggle over uncertainties. Responding with “experts can be wrong” is both trivially true and uselessly general as a critique.
For me, learning from experts means seeking the gaps in knowledge that make them the expert and me not one. I still expect what they say to make sense to me, but that can only happen if I find the parts of my model that can’t account for what they say, and that takes work on my part.
It’s sometimes hard work, and I suspect that’s what makes most people reject expertise when it’s convenient to their disagreement to do so. But we have to be willing to examine our own models, boggle over what’s missing, and not feel threatened by the gaps. Learning can be fun!
So, how to identify “actual experts” so you don’t waste time and energy listening to everyone who claims expertise?
Good question! I wish I had a better answer. It’s often hard, and tempting to outsource to credentials. For many decisions, like car repair or health, it makes sense to defer to doctors and mechanics, though I still always check online just to learn what the thing they say means and whether it fits my experience or symptoms.
But the central question I reorient to is, “What does this person think they know, and why do they think they know it?”
The people I most respect are those who ask others, particularly those who disagree with them, to make their beliefs legible, and who ask what would change their mind. Seeing one expert do this with another is a sign that they reflect on their own knowledge often, and that I should pay more attention to what they say.
This is also how non-credentialed experts can very clearly overturn what credentialed experts say, for me. When someone spends dozens, or even hundreds, of hours making their thinking legible in a way that I can observe, particularly about a specific topic… sure, they can still be wrong, just like the credentialed experts.
But at least I can check whether a credentialed expert addresses their cruxes or not. And I can tease out what part of their belief is based on knowledge they can make legible, vs heuristics or values they aren’t aware of or that I might disagree with.
Transgender Visibility Day, and the Laziness of Language
Happy Transgender Visibility Day!
I’m one of those people for whom “they” and “them” feel about as fitting as “he” and “him,” but I’ve been pretty lucky in a lot of ways and it doesn’t really bother me other than in a few specific circumstances. Normally I don’t even bring it up, but I’ve been considering doing it more often, even though I feel generally masculine, for the sake of normalizing something that really shouldn’t be that big a deal, so that’s part of what I wanted to do with this post.
But the much bigger part of why this feels important isn’t about me, but about the absolute weirdness that comes from society confusing its heuristics and semantic shorthands with deciding it’s allowed to tell people what they “should be.”
In the old days being a “man” or “woman” meant you had to have A, B and C traits, or like X, Y and Z things, and if you were different, that meant you were less of one, which was always framed in a bad way. More and more people are coming to accept that this is nonsense, but we get stuck on things like biology.
It’s not entirely our fault. The problem is we were given shitty words, a lazy language, and told that reality follows the words rather than that the words are a slapdash prototype effort to understand reality.
Does that make me “white” or “Middle Eastern” on the US Census? When people ask if I’m Middle Eastern, what question am I actually answering? (And no, just saying “I’m Persian” or “My parents are from Iran” does not tend to clarify things for them, because this is not something most who ask know themselves!) I’ve almost always passed as white (other than in airports, at least), so most of the time it seems weird to call myself Middle Eastern. My dad and brother are far more obviously from the Middle East, and my dad in particular has lived a very different life as a result of that. I get clocked as Jewish once in a while, but only once in a way that made my life feel endangered.
You’re Probably Underestimating How Hard Good Communication Is
People talk about “Public Speaking” or “Oration” as skills, and they are. We call people “gifted communicators” if they’re generally skilled at conveying complex information or ideas in ways that even those without topical expertise will understand.
We get, on some level, that communication can be hard. But the above is mainly about one-directional communication. It’s what you’re engaging in when you write a blog or social media post, when you’re speaking at a conference or in a classroom or for a YouTube video. It’s not what people engage in day to day with their friends and family and coworkers, which is more two-directional communication.
And yet we don’t have a word for “two-directional communication skill” the way we do “Oration,” or words for people who are really good at it. We might say someone is a “good listener” if they can do the other half of it, and there are some professions that good two-directional communication is implicitly bundled with, such as mediator or therapist, but neither profession is specifically about being skilled at the everyday thing.
So first let’s break this “two-directional communication” thing down. What does it actually take to be good at communicating like this? What subskills does it involve?
1) Listening to the words people actually say, also known as digital communication.
2) Holding that separate from the implications that went unsaid, but may be informed by body language, tone, expression, etc, also known as analogue communication.
3) Evaluating which of those implications are intended given the context, rather than the result of your heuristics, cached expectations, typical-mind, and general knowledge you take for granted.
4) Checking your evaluation of implications before taking them for granted as true and responding to them.
This is what it means to be a good listener. Not in the “you let me talk for a long time and were supportive” sense, but strictly as a matter of whether you managed to accurately take in the information communicated without missing signal or adding noise.
The second half of being a good communicator involves:
5) Communicating your ideas clearly, with as little as possible lost between the concepts you have in mind and the words you use to express them.
6) Being aware of what your words will imply, both to the individuals you’re speaking to and to the average person of the same demographics.
7) Being aware of what your body language, tone, expression, and the context you’re saying it in will imply.
8) Adding extra caveats and clarifications to account for the above as best you can.
Each of these can be broken down further, but as the baseline these are all extremely important. And yet very few people are great at all of them, let alone consistently able to do each well at all times.
I think this is important as a signpost for what people should strive to do, as a humility check against people who take for granted that they’re communicating well while failing at one or more of the above, and last but not least, as something that should be acknowledged more often in good faith conversations, particularly if things start to go awry.
In addition, there is a population for whom explicit communication feels intrinsically bad, particularly if it’s around their traumas or blind spots, or where their preferences naturally fall toward a more “vibe-like” experience. They can be seen as a mirror-of-sorts for the population for whom analogue communication is intrinsically harder to pick up on… and when these two types of people meet, communication is often much harder than either expects, and much more likely to lead to painful outcomes.
Good communication is harder than we collectively think, and effective two-directional communication is one of those skills we often take for granted that we’re at least “decent” at because we engage in it all the time, and usually get by just fine.
But this leaves us less prepared for when we’re in a situation where we or others fail at one of the above skills, in which case it’s good to have not just a bit more awareness of why we fail, but humility that it’s always a two-way street.
Trust vs Trust
The word “Trust” was never operationalized as well as it should have been, and as a result it can now be used to mean two rather different things.
The first form trust takes is probably the most commonly understood use of the word; expecting someone to behave in a way that’s cooperative or fair. If you trust someone enough, you may enter into a business partnership with them or let them borrow your belongings or vouch for them to friends or colleagues. This trust can be broken, of course, if they start to act in ways other than what you expect them to, particularly if they start to defect from agreements. It is, ultimately, about how well you can model their ability to act prosocially.
The second form trust takes is much rarer, and yet somehow feels to me more like the “true” meaning of the word. It’s a level of trust that’s related to your confidence in someone’s character, sometimes despite their actions. It’s not about predicting what they’ll do in any given situation, but rather predicting the arc that their actions will take over a long enough timeline; trusting them, essentially, to error correct.
This may seem like it has the same outcomes, like if you trust them enough in this way you’d still be okay with lending them something, but it’s far less reliant on game theory or incentives, and far more about what you believe about what kind of person they are. In the first case, if the person you trust does not give back what you lent them, your trust is broken. In the second case, if they do not give back what you lent them, your trust endures, because your expectation is that their character is one who had a good reason not to give it back. This doesn’t require a resolution; it’s baked into the decision to lend them the thing itself, as you’d expect yourself not to regret lending it to them if you had all available future information, and are thus okay with not having that information.
That’s why, in this second sense, “Trust” really only has meaning if it’s applicable to situations where you might normally trust someone less or be unsure of them. If you can always know what someone does and why, your trust of them lacks the real power of the second definition. It’s only when someone is able to act without your knowledge, or acts in ways that you don’t understand, or even that seem like they harm you, that your “true” trust in them is tested, and either justified or not.
Because it can be unjustified. People can trust others in this “true” sense and still be wrong, and be hurt as a result. I think this is why it’s such a rare form of trust, in the end; it’s a more vulnerable stance to take, the same way an expression of love is different from an explicit commitment.
Which ultimately makes this trust about you as much as others. Whether you want to be the kind of person who trusts others to that degree or not is an orientation to vulnerability, and the deeper connections that can result from it. It makes sense not to grant it too often, but to never grant it at all would indicate either an inhibition of true connection, or a paucity of good friends.
Memorization Matters
When I was young, I and others I knew used to deride “memorization tests.” In a world where learning facts is easier and faster than it’s ever been, it was hard to imagine why being able to recite trivia for a test would ever be useful. And since structured education is an abysmal way to learn in general, it took me a while to distinguish the poor pedagogy from the value of actually having memorized knowledge of things, even in the Information Age:
1) Synthesizing existing knowledge is usually necessary to gain new insights about the world. It seems obvious when stated clearly, but pay attention to how often people feel like they have new or interesting ideas, only to discover that they’ve already been had by others or are invalidated by some facts they didn’t know. Knowledge builds on knowledge; the more you have, the more likely you are to generate more.
2) Memorized information saves time, the value of which is often underestimated. People spend a lot of time trying to remember things, arguing about what facts are true (often for inane pop-culture info), and even a 10-second Google search adds up if you do it enough, and can break your flow of thought and productivity. Personally, I spend hours every week researching stuff for my story that someone with a more in-depth physics, history, biochemistry, etc education would just know and be able to use while writing.
3) Having a large body of true knowledge is VITAL for good information hygiene. Lack of knowledge is a big part of what makes up “gullibility.” When you hear an assertion about reality, your mind often automatically feels something, whether it’s skepticism, plausibility, confidence, or just uncertainty, that weird “back and forth” feeling as your brain offers up arguments or data or comparisons for and against.
The more true facts you actually know, the better calibrated your skepticism of false claims will be, and the more likely you are to actually investigate things that are presented as true when you think they’re not, or presented as false when you think they’re true.
To be clear, when I talk about memorized facts, I mostly mean actual understanding, not just being able to say the right combination of noises by rote. Memorizing a list of invention names doesn’t help you create new inventions, being able to recite the names of atoms doesn’t help you understand each one’s properties, and new information will just get absorbed uncritically if you don’t understand what you’ve memorized well enough for there to be some interaction with it. But once in a while even basic memorized trivia like names and dates is valuable for its own sake too.
I don’t mean to counterswing into the opposite extreme. Simple facts are no substitute for critical thinking or creativity, and knowing how to gather good information is also a very important skill. But the knowledge you have stored is what informs your thoughts day to day, and often determines whether you’ll know to start gathering more when faced with new info of dubious quality.
Ontology 101
Learning new words late in life (by which I here mean “in my 30s”) is interesting, because most of the time it’s a word that’s just another version of a word I already know with some subtle difference, or a mashing of two concepts that might be useful to have mashed together once in a while. Truly new concepts become rarer the older and more educated someone is, but as faulty as words are for communicating concepts, if you have no word for a concept then it becomes much harder to think about and discuss, a bit like having to rebuild a chair every time you want to sit down, or only being able to direct people to a location by describing landmarks.
A couple years ago I had no idea what “ontology” actually meant, despite feeling like I was hearing people say it all the time. Once I did, I started using it all the time too. Okay, not actually, maybe a few times a month, but that still feels like a meaningful jump given I had no word to cleanly represent the concept before! So here’s me explaining it in a way that I hope will help others use it too.
The problem was, every time I saw the word used, it seemed like it could be removed from a sentence and the sentence’s meaning wouldn’t change. All the definitions I read appeared to just mash words together in a way that made sense, but didn’t mean anything. For example, Wikipedia says:
“The branch of philosophy that studies concepts such as existence, being, becoming, and reality. It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level.”
This may or may not be a great definition, but it does little to actually tell people how to use the word “ontology” in any other context, or how it can be usefully applied to confusions or conversations.
What I found most helpful, ultimately, was considering the question “Do winged horses exist?”
This is a question of ontology, because depending on how we define “exist,” the answer might be “Probably not, there’s no evidence of any horses ever having wings,” or it might be “Yes, I read about them all the time in fiction, in contrast to flanglezoppers, which is a sound I just made up that has no meaning.”
So ontology is the study and specification of what we mean when we say “real.” But it’s also about categorization; a more useful definition I came across treats the adjective form: “ontological” signifies a relation to subjective models.
What does “a relation to subjective models” mean? Well, all ways of thinking of objects, for example, are subjective models; reality at its most basic level is absurdly fine-grained, far too detailed for us to understand or easily talk about. So we focus on emergent phenomena that are much easier to interface with, even if they’re not as precise. For example, we can talk about a country’s hundreds of millions of individuals, with their own personal goals and desires and preferences, and that can be useful. Or we can just say “The USA wants X” and it’s understood to mean something like “a meaningful chunk of the population” or “the government.” On the flip side, even an individual is not monolithic in their desires, and can be further broken down into subagents that might want competing things, like Freedom vs Security.
So it can be very valuable to know what model/map/layer you’re organizing concepts on, as well as what level your conversation partner is on, to focus discussions. I wrote a brief conversation that shows what this looks like:
The philosophy teacher hands his student a pencil. “Describe this to me as if I was blind.”
The student thinks he’s clever, so says, “Well, it’s a collection of atoms, probably mostly carbon and graphite, with some rubber molecules—”
The teacher flicks the student’s ear, causing him to wince. “You’re in the wrong ontology. What you described could be a lot of different things, it could have been a lubricated piece of coal for all I knew. Describe it in a way that makes its distinctly observable parts plain to me.”
“Um. It’s a core of graphite wrapped in wood, with a piece of rubber on the end?”
“Better. Now switch the ontological frame to the functional parts.”
“It… has a writing part that’s at one end, and it has an erasing part at the other, and it has a holding part between them?”
“Excellent. Now tell me about it from the ontology of fundamental particles…”
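For what it’s worth, the same exercise translates pretty directly into code. Here’s a toy sketch, where the frame names and descriptions are my own illustrative choices rather than any standard taxonomy:

```python
# Toy sketch: one object, several ontological frames. Each frame carves
# the same pencil into different "parts" at a different level of description.
pencil = {
    "particles": "an absurdly fine-grained arrangement of atoms, mostly carbon",
    "materials": {"core": "graphite", "shaft": "wood", "end": "rubber"},
    "functions": {"writing part": "graphite tip", "erasing part": "rubber end",
                  "holding part": "wooden shaft"},
}

def describe(thing, frame):
    """Describe the same object within a chosen ontological frame."""
    return thing[frame]

print(describe(pencil, "functions"))  # the frame the teacher called excellent
```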
There may be no end to the ontological frames you can use to examine and organize reality; animals can be classified by environmental preference or limb count or diet, stories by genre or structure or perspective, food by flavor or culture or substance. Some are more broadly useful than others, but being able to swap ontological frames, of how concepts are related and at what complexity level of “reality” they emerge, can be very valuable for the whole practice of using maps, frames, lenses, etc in a strategic way.