My name is Don, and I have a problem. I’m trying to make sense of my world. Some time ago I asked myself the question, “What is the earliest indicator that something is going wrong?” And of course, I’m not happy with just a single problem; I’m looking for the answer for the base problem class, from which all other problems inherit. What would you expect from an ENTP?
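
To push the inheritance metaphor all the way into code, here’s a minimal, hypothetical sketch in Python. The Problem base class, the ScheduleSlip subclass, and its example indicator are all invented for illustration; they don’t come from any real library or system:

    # A toy rendering of the metaphor; every name here is made up.
    class Problem:
        """The base problem class, from which all other problems inherit."""

        def earliest_indicator(self) -> str:
            # Each concrete problem supplies its own earliest warning signal.
            raise NotImplementedError


    class ScheduleSlip(Problem):
        """One hypothetical subclass, purely for illustration."""

        def earliest_indicator(self) -> str:
            return "people quietly stop talking about the estimates"


    print(ScheduleSlip().earliest_indicator())

The point of the sketch: the question I care about lives on the base class, and every specific problem is expected to answer it in its own way.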

A Fistful of Models

Along the way, I’ve accumulated several models and tools. So far, the explicit ones include:

  • the Cybernetic Model
  • Diagrams of Effect
  • Behavior Over Time Graphs
  • the five systems attributes (openness, purposefulness, multidimensionality, emergence, counterintuitiveness)
  • systems archetypes
  • the Satir Interaction Model
  • the Satir Transformation Models
  • MBTI (and temperaments)
  • abstracting (Korzybski)
  • abstraction (Hayakawa) [for a comparison, check out this article]
  • the NLP Meta-Model
  • logical levels
  • meta-programs
  • intake modalities
  • the Rule of Three
  • and a decision-making model from Ackoff.

At least, these are the models that came to mind. I’m sure I have others.

Everyone Does It

We all model our world. I discussed some aspects of modeling in 2003 when I wrote Choosing Change. Because each of us models differently, your “world” ends up different from my “world.” But if the “worlds” overlap enough (and in general most models have sufficient overlap), most of the time the differences don’t create problems.

Is the Model Accurate?

In Tools of Critical Thinking: Metathoughts for Psychology, David Levy discusses the “Reification Error.”

To reify is to invent a concept (or “construct”), give it a name, and then convince ourselves that such a thing exists in the world.

Constructs stand in contrast to concrete things. For example, a brain is a thing; the mind is a construct. If you’re not sure whether something is a thing or a construct, use the wheelbarrow test: if you can put the “noun” in a wheelbarrow, it’s a thing; otherwise, it’s a construct.

Constructs cannot be proven accurate. They exist at a higher logical level than things, and moving up the abstraction ladder (you did read the article, didn’t you?) changes the question we need to ask. The real question becomes: is this construct/model useful? In Getting to Language I chained a “logical level” model with the Satir Interaction Model. It may or may not be accurate, but I find the resulting model useful for trying to understand what’s happening when I’m having a conversation with someone.

Got a favorite model you’d like to share? Drop me a note.