3 Comments

Now that the Supreme Court decisions are made, legislators can draft new “Supreme Court-proof” policy. Here’s an idea: what do you think?

Social media algorithms have “learned” that hate speech and negative language make money for the company. That same technology could be programmed to counterbalance hate and negativity by searching for and amplifying counterpoints that are underexposed due to the bias in social media AI, using a metric I call “real time reach”.

To incentivize balance, social media companies should report results to the FCC for any issue exceeding a “real time reach” of 15%. When balance is not achieved, the company would not be protected by the Section 230 liability exemption for related damages.
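The comment doesn’t define “real time reach” or the balance test precisely, so here is a minimal sketch of how such a check might look, assuming reach is an issue’s share of total current impressions and balance means counterpoint exposure within some tolerance of the dominant viewpoint’s exposure. All names, data shapes, and the 80% tolerance are illustrative assumptions; only the 15% threshold comes from the comment.

```python
# Illustrative sketch only: "real time reach" and the 15% threshold come from the
# comment above; the field names, data shapes, and balance rule are assumptions.

REACH_REPORTING_THRESHOLD = 0.15   # report issues whose reach exceeds 15%
BALANCE_TOLERANCE = 0.80           # hypothetical: counterpoints must receive at least
                                   # 80% of the dominant viewpoint's exposure


def real_time_reach(issue_impressions: int, total_impressions: int) -> float:
    """Share of all current impressions belonging to one issue (assumed definition)."""
    if total_impressions == 0:
        return 0.0
    return issue_impressions / total_impressions


def needs_fcc_report(issue_impressions: int, total_impressions: int) -> bool:
    """True when an issue's real time reach crosses the 15% reporting threshold."""
    return real_time_reach(issue_impressions, total_impressions) > REACH_REPORTING_THRESHOLD


def is_balanced(dominant_view_impressions: int, counterpoint_impressions: int) -> bool:
    """Hypothetical balance test: counterpoint exposure within tolerance of the dominant view."""
    if dominant_view_impressions == 0:
        return True
    return counterpoint_impressions / dominant_view_impressions >= BALANCE_TOLERANCE


if __name__ == "__main__":
    # Example: an issue drawing 18% of all impressions, with counterpoints getting half
    # the exposure of the dominant framing, would be reported and flagged as unbalanced.
    total, issue = 1_000_000, 180_000
    dominant, counter = 120_000, 60_000
    print(needs_fcc_report(issue, total))   # True  (0.18 > 0.15)
    print(is_balanced(dominant, counter))   # False (0.5 < 0.8)
```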

Having worked in media that is not exempt from liability, I can say from experience that fear of liability creates a culture that isn’t “moving fast and breaking things”.


(One man’s thoughts, freely given and worth ALMOST that much.)

We are at the start of a New World. It started when two things happened close together: the Fall of the Soviet Union and the rise of The Little Silicon Chip (The Net). This brought Chaos, and we are still trying to figure out What To Do.

As for Big Social Media (Section 230): make them choose, Platform or Publisher? They are saying they are one thing and acting like another.


I agree, and that’s a simpler solution. But it seems the Supreme Court doesn’t, right? Or did I misinterpret this last ruling?
