Discussion about this post

Demian Entrekin 🏴‍☠️

I didn't want to hijack your article, so I created this post.

https://substack.com/@demianentrekin/note/c-118429435?r=dw8le

Peter Gerdes

I agree with the idea, but this is much more difficult than you suggest. Ultimately, what makes AI valuable is *exactly* that it has priors about which things are likely and which are not. An AI that didn't understand that Nigerian princes aren't very likely to share vast wealth with random Americans who hand over bank details would be a terrible email assistant.

And this is no less true when it comes to helping understand the world. It isn't even remotely practical to go all Descartes and rebuild your beliefs from the ground up via first principles, and AIs certainly aren't capable of that, so to be useful they really do have to embed certain priors (e.g., peer-reviewed scientific consensus is more trustworthy than rants by Substack randos). And you can't really avoid that bleeding over into sensitive issues. How should the AI treat claims of religious revelation relative to scientific results? Why should it treat the claim that Christ rose from the dead any differently than the claim that my friend Bob did?

The best hope we have is building AIs that are broadly flexible and can be customized, but I don't know that we can hope to have AIs that don't by default build in generally accepted cultural assumptions (e.g., the Holocaust was real).

13 more comments...
