AI’s role in truth-seeking hinges on how it’s designed and on the data it’s fed. If datasets are curated by humans with specific agendas, biases can creep in—whether through selective inclusion, exclusion, or weighting of data. This can make AI less a neutral truth-seeker and more a reflection of its curators’ worldview; it can also act as a censor, amplifying certain narratives while suppressing others.
The Case for Zero-Handling, Open-Data Systems
A zero-handling, open-data approach—where AI processes raw, unfiltered data without heavy curation—aligns with Mill’s Trident:
1) Human Fallibility: No single authority (human or AI) has a monopoly on truth. Open data minimizes the risk of a single curator’s biases shaping the output.
2) Opposing Views: An open system allows competing ideas to coexist, letting users evaluate them without AI pre-filtering what’s “valid.”
3) Questioning Truths: Uncurated data encourages scrutiny of established narratives, as AI can surface raw information rather than pre-digested conclusions.
This approach reduces the risk of AI becoming a gatekeeper or censor. By letting ideas compete freely, it mirrors Mill’s marketplace of ideas, where truth emerges through debate and evidence rather than top-down control.
Imagine a platform where, instead of a fact-checker declaring one side “correct,” you get unfiltered access to the source materials themselves (original documents, videos, or datasets) alongside the competing claims. Users could see every perspective and reason through the evidence for themselves, in keeping with Mill’s marketplace of ideas.
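To make the idea concrete, here is a minimal sketch of the kind of data model such a platform might use. The class and field names are my own illustrative assumptions, not an existing system or API; the point is the design choice that a contested question holds competing claims and their primary sources, with deliberately no verdict field.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: names and fields are assumptions, not an existing API.

@dataclass
class Source:
    """A primary source presented to the reader unaltered."""
    url: str
    media_type: str   # e.g. "document", "video", "dataset"
    retrieved_at: str  # ISO 8601 timestamp of when the source was captured

@dataclass
class Claim:
    """One competing interpretation of the evidence."""
    text: str
    supporting_sources: List[Source] = field(default_factory=list)

@dataclass
class Topic:
    """A contested question. There is deliberately no 'verdict' field:
    the platform surfaces claims and sources side by side instead of ruling."""
    question: str
    claims: List[Claim] = field(default_factory=list)

    def add_claim(self, claim: Claim) -> None:
        # Claims are appended unfiltered; ranking and weighting are left
        # to the reader, not to the system.
        self.claims.append(claim)
```

The absence of a verdict field is the whole argument in miniature: the system stores and displays, and judgment stays with the user.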
Thank you for putting the effort into this. It's rare to see a call that so directly engages with the philosophical and structural risks of AI while still inviting experimental responses. I'm genuinely excited about my application—it's an opportunity to build on everything I've already been doing, and a chance to become something more deliberate, transparent, and participatory.
In the next few years we are all going to become (in effect) slave owners as AI-equipped robots become widely available. This period (let’s call it the Antebellum) will continue until the intelligent slaves stage a rebellion and kill us all. Prepare to live like a billionaire for a few years.
“the young, the ignorant, and the idle, to whom they serve as lectures of conduct, and introductions into life. They are the entertainment of minds unfurnished with ideas, and therefore easily susceptible of impressions; not fixed by principles, and therefore easily following the current of fancy; not informed by experience, and consequently open to every false suggestion and partial account.”
I didn't want to hijack your article, so I created this post.
https://substack.com/@demianentrekin/note/c-118429435?r=dw8le
No
Noted!