Discussion about this post

COMRADITY:

Now that the Supreme Court decisions are made, legislators can draft new “Supreme Court-proof” policy. Here’s an idea - what do you think?

Social media algorithms have "learned" that hate speech and negative words make money for the company. That same technology could be programmed to balance hate and negativity by searching for and amplifying counterpoints that are underexposed because of the bias in social media AI, using a metric I call "real time reach".

To incentivize balance, social media companies should report results to the FCC for any issue whose "real time reach" exceeds 15%. When balance is not achieved, the company would lose the Section 230 liability exemption for related damages.
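The proposal above could be sketched in code. Everything here is a hypothetical illustration: the function names, the interpretation of "real time reach" as an issue's share of current impressions, and the balance tolerance are all assumptions, not a specification from the comment.

```python
# Hypothetical sketch of the proposed "real time reach" check.
# The metric definition, threshold semantics, and balance tolerance
# are assumptions made for illustration only.

def real_time_reach(issue_impressions: int, total_impressions: int) -> float:
    """Assumed definition: the share of current impressions one issue receives."""
    if total_impressions == 0:
        return 0.0
    return issue_impressions / total_impressions

def needs_fcc_report(issue_reach: float, threshold: float = 0.15) -> bool:
    """An issue exceeding the 15% real-time-reach threshold triggers reporting."""
    return issue_reach > threshold

def is_balanced(dominant_impressions: int,
                counterpoint_impressions: int,
                tolerance: float = 0.2) -> bool:
    """Assumed balance rule: counterpoint exposure must reach at least
    (1 - tolerance) of the dominant side's exposure."""
    if dominant_impressions == 0:
        return True
    return counterpoint_impressions / dominant_impressions >= 1 - tolerance
```

Under this sketch, an issue drawing 20% of impressions would be reportable, and it would count as "balanced" only if counterpoints received at least 80% of the dominant side's exposure.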

Having worked in media that is not exempt from liability, I can say from experience that fear of liability creates a culture that isn't about "moving fast and breaking things".

