If a controversial hypothesis typically faces an abundance of moralistic attacks, red herrings, and weak arguments, that is at least some evidence in favor of that hypothesis.
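The claim above can be framed as a small Bayesian update. This is a minimal sketch, not anything from the original comment: the likelihood values are illustrative assumptions, chosen only to show that if weak, moralistic attacks are somewhat more probable when a hypothesis is true than when it is false, observing them nudges credence upward without proving anything.

```python
# Toy Bayesian update: does observing mostly-weak attacks on a hypothesis
# raise our credence in it? The probabilities below are made-up
# assumptions for illustration, not empirical estimates.

def update(prior, p_evidence_given_true, p_evidence_given_false):
    """Return the posterior P(H | E) via Bayes' rule."""
    numerator = prior * p_evidence_given_true
    denominator = numerator + (1 - prior) * p_evidence_given_false
    return numerator / denominator

prior = 0.50                    # credence before seeing the attacks
p_weak_if_true = 0.70           # weak attacks likelier if critics have nothing better
p_weak_if_false = 0.40          # but weak attackers exist either way

posterior = update(prior, p_weak_if_true, p_weak_if_false)
print(round(posterior, 3))      # a modest update upward, not proof
```

The point of the sketch is the ratio, not the numbers: the evidence only counts insofar as weak attacks are genuinely less likely against false hypotheses.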
An important observation: "Those who discuss controversial research agendas in the face of intense scrutiny will have disproportionately non-conformist personalities."
This point was made especially well by Geoffrey Miller in The Neurodiversity Case for Free Speech: https://www.youtube.com/watch?v=FWAtE0tahtM
The best discussions can be had between non-conformists.
Johnny, have you considered debating Dutton and Menie on why embryo selection can have a larger impact than they argue (due to the "ick" factor which IVF also suffered from), because it will be disproportionately used by the impactful smart fraction?
I don't think Menie is a public person anymore. Dutton might talk to Jonathan if he reaches out. I asked Alex Kaschuta to talk to Johnny. She had on JF Gariepy on Subversive who had a pretty pessimistic attitude about genetic enhancement and I wanted some more positive arguments.
I read Noah Carl and Emil Kirkegaard's paper recently and I think a good case could be made that a very smart smart fraction would be very beneficial to society (https://kirkegaard.substack.com/p/smart-fraction-theory-vindicated/comment/11733826). Healthier people, happier people, and smarter people are good for everyone.
Maybe you are the right person to join him on his show, or ask him onto your pod. You're well-spoken.
Maybe someday!
Thank you. I seriously appreciate that.
Nothing to appreciate - if you think selflessly and without ego, you know it's simply true. If you desire to shape the future in the way we know it must be shaped, making your voice in the mould of Dwarkesh is a good tool for this goal.
Also, as for your substack, I think you can see what topic gets the most hearts and comments. Even a short clip with Dutton would have great metrics, and add a lot of subs to the substack.
The interesting thing here is that this line of thought probably holds true whether or not the controversial idea is actually true. I mean, when looking at a true idea like heritability of personal characteristics, you would expect arguments against it to be weaker than arguments in its favour anyway (even if it were wholly non-controversial and apolitical) simply because it is true. But the discussion above suggests that you might find ideas with good arguments in favour and poorer arguments against even if they are false. I suppose the lesson is to look for the best arguments on each side, rather than the average argument.
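The closing lesson can be made concrete with a toy comparison. The argument-quality scores below are invented for illustration only: ranking sides by their average argument and by their best argument can point in opposite directions.

```python
# Toy illustration: judge a side by its best argument, not its average.
# Scores are made-up quality ratings for hypothetical arguments.
side_a = [9, 3, 2, 2]   # one strong argument amid many weak ones
side_b = [6, 6, 5, 5]   # uniformly decent arguments

mean_a = sum(side_a) / len(side_a)   # 4.0 -> averages favour side B
mean_b = sum(side_b) / len(side_b)   # 5.5
best_a, best_b = max(side_a), max(side_b)   # 9 vs 6 -> best favours side A

print(mean_a, mean_b, best_a, best_b)
```

A side swamped by weak or moralistic arguments can still hold the single strongest one, which is exactly why averaging over arguments misleads.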
I've been musing on this with respect to energy sources. Arguments in favour of fossil fuels appear, on average, to me, to be better thought out than arguments in favour of renewables. I still think the fossil fuel side is basically wrong, possibly down to this dynamic.
I say it depends.
It is possible to have a ridiculous hypothesis that is neither defensible nor heuristically plausible; say, a teapot floating among Saturn's rings that emits pulsating negative-energy waves, turning every individual evil.
Most people who hold objectionable hypotheses do not, however, reason like this or make claims of that sort.
There are three demographics of counter-narrative claimants (in any field):
(a) idea-novelists, skeptical analysts, system-completeness thinkers
- we see this with 'washing hands', 'smoking', 'mercury', 'thalidomide', 'male vs. female sexual dimorphism', epidemiological claims, etc.
(b) crazy exceptionals, delusionals, people with low latent inhibition plus certain disorders or disinclinations, or low IQ plus a low propensity for mechanistic reasoning
(c) money-snake grifters (who will say anything for $$, feeding confirmation bias)
The first two groups must be low on agreeableness. They must also be able to ignore the emotional sting of ostracization and the associative (positive or negative) stimuli planted by the communities around them.
The first population will have one or more of these dispositions:
(a) evaluating premises carefully
(b) verifying the implications of a conclusion
(c) reconciling sensory evidence with models, comparing information, and cross-referencing for consilience and congruency
The second population has a lower cognitive threshold for filtering information but is more suggestible, or has a tenacious desire to belong to a counter-narrative group.
The third thing to note is that any repeated signal in the real world coming from a finite number of intrinsically linked sources (i.e., funded by the same organization) tends to have motives and objectives, whether industrial or political. The signal can come in the form of a claim, a narrative, a dataset, a model, or anything else. Since the objective is to sway you toward a certain viewpoint regardless of whether it is factually true, or to impart only partial observability, the signal is likely to be a less-than-complete factual picture. So NOT-A is potentially more likely than A (for whatever they are suggesting), although COUNTER-A might or might not apply. If the source is generating many conflicting claims A, B, not-A, not-B, then usually only some combination of a few, or none, of those claims holds (on a heuristic basis). Or the causal relation has real complexity to it, because one has to spend a proportional amount of $$$ to create multiple nuanced viewpoints; it is much easier to fund only one or two.
When a signal is not repeated (and not contested), it is likely to be more factual, in the sense that no one has an underlying motive to benefit from it. Ways-to-live claims are always heavily contested because everyone wants you to buy, or to live, a certain way; the same goes for X-is-enemy or X-is-foe claims. By contrast, the observation that the energy return on per-capita investment in biotic fuels is diminishing, and that no new megafauna have been discovered, is likely to be true. Claims that the sky is falling, or that crazy events are happening and you must act now, are likely to be overstated.
Intelligent people tend to add qualifiers and nuances (in general) to avoid overstating or understating a case, although such formalities are sometimes meant to obfuscate (e.g., saying 'General Skills Level' instead of 'Intelligence'). Regardless, every researcher wants 'novel' results, and you will not get funding if you cannot display 'results' to the world, so we end up with this replication treadmill and endless so-and-so claims repeated ad nauseam, particularly in non-falsifiable fields. Mathematics and physics are least susceptible to this because their statements are syntactically rigid: you cannot out-interpret or lawyer your way into ambiguity around any conceivable statement.
Thanks for the link! Perhaps he was influenced by my piece? :) I really enjoy Richard Hanania and Rob Henderson's discussions.
I'm not exactly sure why, but the left seems more fixated on language than the right. I don't know if it's an inherent feature. I've written about this before [1]. These word games are often very apparent. The PATRIOT Act and FREEDOM Act are so contrived that they make good examples.
I think the opponents of behavioral genetics often engage in what I think of as "inflammatory rephrasing." It's not necessarily a strawman, but it is basically the most appalling formulation of your belief they can give. This shouldn't work on someone who is thinking clearly.
[1] https://parrhesia.substack.com/p/playing-word-games-with-the-woke