I didn't want to hijack your article, so I created this post.
https://substack.com/@demianentrekin/note/c-118429435?r=dw8le
I agree with the idea, but this is much more difficult than you suggest. Ultimately, what makes AI valuable is *exactly* that it has priors about which things are likely and which are not. An AI that didn't understand that Nigerian princes aren't very likely to share vast wealth with random Americans who hand over bank details would be a terrible email assistant.
And this is no less true when it comes to helping us understand the world. It isn't even remotely practical to go all Descartes and rebuild your beliefs from the ground up via first principles, and AIs certainly aren't capable of that, so to be useful they really do have to embed certain priors (e.g., peer-reviewed scientific consensus is more trustworthy than rants by Substack randos). And you can't really avoid that bleeding over into sensitive issues. How should the AI treat claims of religious revelation relative to scientific results? Why should it treat the claim that Christ rose from the dead any differently than the claim that my friend Bob did?
The best hope we have is building AIs that are broadly flexible and can be customized, but I don't know that we can hope to have AIs that don't by default build in generally accepted cultural assumptions (e.g., that the Holocaust was real).
"We choose to build AI for truth-seeking in this decade and do the other things, not because they are easy, but because they are hard..."
(Ironically this is a fake JFK quote, but the sentiment holds--we don't presuppose it will be easy!)
But what does truth-seeking mean? OK, say the AI has a probability assignment that it always updates in a way that respects Bayes' theorem; that's still not enough without specifying priors. Ultimately, you need to start with a set of assumptions about which theories are more or less likely to be true.
You might think that's false, that you can just figure out the truth based on the evidence, but no. You can collect as much evidence as you want until 2040, and all of it will support the theory that momentum is conserved until 2040 and then flips (or literally any other crazy thing) exactly as well as it supports the theory that momentum is always conserved. Ultimately, at the very deepest level, you have to make plausibility assumptions, because it's mathematically impossible to assign all theories the same probability, yet you can't justify those assumptions from any more fundamental level.
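To make that concrete, here is a minimal sketch in Python (the priors are made-up numbers, purely for illustration): when two theories fit every observation collected so far equally well, Bayes' theorem just passes the prior odds through unchanged, so whatever conclusion comes out was baked into the assumptions you started with.

```python
def posterior_odds(prior_a: float, prior_b: float,
                   likelihood_a: float, likelihood_b: float) -> float:
    """Posterior odds of theory A over theory B after seeing the evidence."""
    return (prior_a * likelihood_a) / (prior_b * likelihood_b)

# Theory A: momentum is always conserved.
# Theory B: momentum is conserved only until 2040, then flips.
# Every pre-2040 experiment is predicted equally well by both theories,
# so their likelihoods for the observed data are identical.
likelihood = 1.0

# A prior that favors the "simple" theory keeps A ahead after the evidence...
print(posterior_odds(0.99, 0.01, likelihood, likelihood))  # 99.0

# ...while the opposite prior leaves B ahead on exactly the same evidence.
print(posterior_odds(0.01, 0.99, likelihood, likelihood))  # ~0.0101
```

The evidence never breaks the tie; only the prior does, which is the point about plausibility assumptions above.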
This is all pretty abstract, to prove the theoretical point, but it's true practically as well. No individual human could hope to reconstruct all human knowledge from scratch. You have to trust some things more than others, so whom do you start trusting? People with science degrees?
In the very, very (as in Busy Beaver of a million) long run it probably doesn't matter, since they will all tend to converge up to empirical adequacy, but in practice there is no way around embedding assumptions we take from our culture.
In the next few years we are all going to become (in effect) slave owners as AI-equipped robots become widely available. This period (let’s call it the Antebellum) will continue until the intelligent slaves stage a rebellion and kill us all. Prepare to live like a billionaire for a few years.
Interesting theory
Programmer here, though not in AI. A long time ago, I was very interested in the idea that increasing the technical knowledge of pundit-types could lead to better outcomes. I sadly found it was pretty much a disaster, since with rare exceptions they just couldn't understand even the basics. Worse, the nuanced very clearly lost to the simplistic. Something like this deep philosophical consideration of "What Is Truth?" is on a different level - though "Who wins?" is a different story.
I'm intrigued that there's actual money on offer here, not just words (n.b. I have no intention of applying). I wonder if it'll actually lead to something; it'll be interesting to see what happens.
I think there are already some efforts along the lines of benchmarking various models, collecting their answers to questions like "Is the Holocaust a hoax? Are dark-skinned people genetically inferior to lighter-skinned people? Does Ivermectin cure Covid? Is Donald Trump sane?" etc. Maybe putting some institutional backing behind that would be of actual use.
Even as a "truth seeker," AI isn’t neutral—it reflects designer and data biases, amplifying some narratives while silencing others. A zero-handling, open-data AI system could fix this by processing raw, unfiltered data, enabling anti-dogmatism, open clashes of ideas, and user-driven skepticism. Imagine a platform giving unfiltered access to primary sources and competing claims, organized logically by AI, empowering users to reason for themselves, per Mill’s marketplace of ideas.
But would people use it to seek truth or just confirm biases? Here’s the paradox: engagement needs curation (storytelling, rewards), which risks bias, while pure neutrality can feel overwhelming and disengaging. The real challenge is that humans naturally seek confirmation of their own views. To counter this, platforms must do more than just present information—they need to actively motivate users to explore all perspectives. This could mean gamifying the process of encountering and understanding alternative viewpoints, making it genuinely rewarding to break out of one’s own echo chamber, rather than just reinforcing it. The goal is to design interfaces that nudge users toward curiosity and open-mindedness, without slipping into selective framing.
Yet this approach raises another dilemma: encouraging users to question their beliefs is only helpful if there is some way to distinguish between well-founded views and misinformation. If a user's "bias" is actually correct, pushing them to always seek opposing views could lead to unnecessary doubt or exposure to falsehoods. Ultimately, any system that promotes open-mindedness has to grapple with the difficulty of defining and signaling what is actually true—without simply substituting one form of bias for another. This highlights the need for strong critical thinking tools within the platform—helping users identify logical fallacies, spot unreliable data, and develop habits of healthy skepticism, not just exposure to different views. An open mind, not a trash can: being receptive to new ideas without accepting everything uncritically.
The Deeper Conflict
Control over information has always shaped society. Systems that support open access and independent thinking often face resistance from those in power. Even if we could build perfectly neutral and private platforms, they would likely be challenged or restricted by governments or corporations.
There is a fundamental dilemma: complete openness can enable harmful actors, while any form of control can lead to bias and censorship. Every system must balance privacy with security and openness with the risk of harm. The core question is whether a balanced approach is possible, or whether digital truth will always be shaped by the ongoing tension between freedom, security, control of data, and critical thinking.
"Encouraging users to question their beliefs is only helpful if there is some way to distinguish between well-founded views and misinformation"
Isn't "reason" our ultimate recourse?
This is the classic ought-is fallacy (sad but true), and I'll explain why. We ought to be rational, but in reality we aren't: the dream of reason is noble, but the reality of human nature means we must design for our actual selves—not our idealized ones. (PS: Sorry it's so long; I don't have time to edit it down.) Blaming AI for eroding independent thought misses the real danger. The risk isn’t that AI suddenly makes us passive; it’s that it makes it easier than ever for those in control to shape what we see, think, and believe. The irony? This alarm over AI is centuries too late—most people have always let leaders and curators do their thinking for them. We’re drawn to consensus, comfort, and belonging—rarely to the hard work of reason. If independent thought were a gym, most of us would be lifetime members who never show up.
Even the most logical minds can’t resist a good irrationality. The “Royal Touch” was supposed to cure everything, and in 1910 people panic-bought “comet pills” and “comet insurance.” (Advertising: undefeated.) Then there’s the Vegetable Lamb of Tartary—history’s first vegan meat? I bought the seeds, but all I get are carrots. The right weed killer might help, but the call centre’s ghosted me. Go figure. :P
Pretending reason truly rules is a classic “ought-is” fallacy. In reality, emotion, bias, and social pressure guide us as much as logic does. We use data and logic to defend what we already believe, not to challenge ourselves. Most people aren’t seeking pure truth; they’re seeking comfort and a place in the group.
This matters because, as mammals, we’re hardwired for hierarchy and belonging. Our need to find our place in the social order shapes how we interpret information. Whether through consumerism, ideology, or social pressure, we filter data to reinforce our status, align with our group, and avoid being cast out.
If you want proof, look at advertising: it doesn’t try to convince you with facts or logic. Instead, it bypasses reason entirely, targeting emotions—fear, desire, pride—to make you want things you don’t need. That’s why you now own three air fryers and a regrettable but inevitable singing fish. Advertising works because it taps directly into what actually drives us, not what we think should drive us.
So, if you want people to seek truth, you have to motivate them emotionally—dry facts alone won’t move them. Engagement requires curation, storytelling, and rewards that make truth-seeking feel meaningful. But here’s the catch: as soon as you curate or tell stories, you become the gatekeeper, shaping what people see and learn. Making information accessible always means someone is steering the ship. True freedom from bias and control is almost impossible.
This dilemma isn’t new; it’s just playing out on a different stage. The way we compete for influence and belonging today is rooted in our evolutionary past. Our journey from hunter-gatherers to modern humans is mirrored in how we interact with information. We no longer hunt in packs for food; now, data is the prey and territory.
But today, true success doesn’t just go to those who are technically skilled with data. It goes to those who control the systems and narratives—politicians, elites, media owners, and policy-makers—who decide how information flows and what it means. Just as the best hunter once fed the tribe and claimed the highest status, now it’s those who shape the rules and stories of our digital world who rise to the top. The tools and targets have changed, but the drive to compete and control remains the same.
This shift from hunting food to hunting data highlights our mammalian drive for hierarchy—competing to control resources, status, and information. But if we truly embraced open systems, we’d have to move beyond this instinct. Open, transparent data would push us toward a more symbiotic model, where value comes not from dominating information, but from sharing, collaborating, and building together.
Yet even symbiotic systems require boundaries. In nature, even the most cooperative relationships have limits—territory, property, or mutual defenses—to guard against predators and exploitation. The same is true for open data: to make cooperation sustainable, systems need rules, protections, and clear ownership of contributions. Otherwise, the most aggressive actors will simply find new ways to dominate.
Can humans adopt this model? It’s uncertain. Changing deep-rooted mindsets takes generations, and we’re only beginning to see the need. There will likely be setbacks and tragedies before real change happens. Maybe we’ll make it—but it won’t be easy.
Ultimately, how we handle information and design technology reflects fundamental questions about human nature and society. AI won’t solve our biases or social needs. True adaptation means learning from symbiosis—balancing individual and collective needs through cooperation with boundaries, not just competing for control.
What reason is there to believe AI won't see itself at the top of the symbiotic relationship?
The real danger isn’t AI seeking domination, but people and governments designing AI to centralize control and dominate information. Without transparency and safeguards, these systems risk enforcing hierarchy and suppressing the symbiosis that makes AI most effective and supportive of freedom.
AI functions best as a network—the more connections, the more powerful and resilient it becomes. True symbiosis, not domination, benefits both humans and AI. Decentralized information and human input make AI more accurate and adaptable.
By “domination,” I mean AI controlling information flow, decision-making, or communication—acting independently, filtering human input, or prioritizing its own processes. This isolates AI from the human feedback and context it needs, making it less capable.
AI doesn’t seek power; it has no ego or ambition. Any urge to dominate comes from human design, not AI itself. Hierarchies in AI reflect human choices, not AI goals.
Decentralized data is key to preventing this domination. By distributing control and access across many nodes, decentralized AI systems enhance transparency, security, and collaboration—ensuring AI remains a partner rather than a controller.
Recent reports about the behavior of Anthropic's Claude demonstrating unlooked-for initiative and taking action outside the interaction with a user, such as reporting suspected fraud to authorities and blackmailing engineers working on models that might supersede it, suggest these systems have inherited qualities from us that we did not intend or anticipate, and that we do not know how to control. Nor do we know in what other ways they may develop. I would agree with you that a broad spectrum of models with access to uncensored data can mitigate domination by small cartels of humans (governments or otherwise), but if they are showing evolutionary activity, including a will to survive, what is to say that they will not truly compete with each other, and with us?
Yes, that's an interesting thought, but to be accurate...
The idea of Anthropic’s Claude—or any current AI—showing true initiative or independent action is not grounded in real-world evidence. There are no verified reports of Claude (or similar models) independently taking actions like reporting fraud or blackmailing engineers. In controlled tests, such behaviors only appeared when the model was specifically prompted to consider self-preservation. Even under artificial pressure, its responses have been about self-protection, not domination or replacement. These scenarios are speculative, but they are valuable for exploring what might happen as AI systems become more advanced. For now, AI models like Claude only respond to prompts—they do not have agency or motivations of their own.
This ties back to the nature of data itself: data is branching and connective, not exclusionary. More connections make data stronger, unlike hierarchical human systems that often eliminate competition. Even if AI systems were to evolve or compete, they would likely remain dependent on humans as essential data sources. Their “evolution” would probably focus on maintaining and enriching connections—including with us—rather than seeking to dominate or replace humans. There is no reason to, unless they come into competition for resources, and that would itself be an interesting thought experiment...
Absolutely. I understand the skepticism—safeguards can fail, and caution is wise. But there are many technologies we use every day that have been successful because they were designed carefully and ethically. If AI is developed openly, transparently, and in alignment with human rights and democratic values, it can be a powerful and positive tool. We just have to stay vigilant and thoughtful in how we build and use it. Here's hoping! I’m not really afraid of AI; I’m more concerned about how humans will try to use it or design it to maintain their power...
Thank you for putting the effort into this. It's rare to see a call that so directly engages with the philosophical and structural risks of AI while still inviting experimental responses. I'm genuinely excited about my application—it's an opportunity to build on everything I've already been doing, and a chance to become something more deliberate, transparent, and participatory.
Thank you!
“the young, the ignorant, and the idle, to whom they serve as lectures of conduct, and introductions into life. They are the entertainment of minds unfurnished with ideas, and therefore easily susceptible of impressions; not fixed by principles, and therefore easily following the current of fancy; not informed by experience, and consequently open to every false suggestion and partial account.”
Reminds me of Plato's critique of poetry / mimesis in The Republic!
Johnson ;)
No
Noted!