The cover of the March 2018 issue of Wired shows an injured and bandaged Mark Zuckerberg, a metaphor for the bruising he’s
taken in fighting threats to Facebook’s future. In one countermove, he attempted to sidestep questions of how much responsibility
the company should have for the accuracy of the content carried on its platform.
Difficult questions arise: How do we balance the
right to free speech with the preservation of democracy, privacy, and decency, and with the protection of individuals?
Is it really the responsibility of social media companies to be the editors of content created by others?
Zuckerberg is said
to be personally concerned with implementing measures to make Facebook communities as representative, civil and trustworthy
as possible [1]. It’s also a sound business move for Facebook, because being known for hosting disinformation campaigns hurts
the company in ways that matter:
- Revenue. Advertisers withdraw, objecting to their brands appearing near offensive content.
- Brand. Facebook’s daily active user base in the U.S. and Canada fell for the first time in the fourth quarter
of 2017, dropping to 184 million from 185 million in the previous quarter [2].
- Risk. Legislation and regulation may be enacted that seek to restrict hate speech, misinformation and indecency.
How can trust be restored to the platform without tweaking
the algorithms so heavily that the news feed loses its relevance and interest, and users flee?
Intensity Analytics can help in three important ways:
- We differentiate human users from bots by their behavioral signatures. Simply
flagging suspect content would give users a visual cue that it may be computer-generated and thus not worth their attention.
- We recognize human users and grant them credentialed status as “verified users,” creating a self-regulating association
of individuals (akin to the American Bar Association) who agree to submit their work to fact-checking by a third party [3].
- We add value to the Facebook credential by collaborating with single-sign-on platforms, without impacting the all-important
user experience. We strengthen the weak passwords now in use with an individual’s behavioral measurements, captured
silently and naturally as they type, to confirm that it is actually the registered person posting and commenting online.
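The idea behind that last point can be illustrated with a minimal keystroke-dynamics sketch. Everything here — the function names, the single flight-time feature, and the fixed tolerance — is an assumption for illustration, not Intensity Analytics’ actual method, which involves far richer behavioral measurements.

```python
# Illustrative sketch of keystroke-dynamics verification.
# All names and thresholds are assumptions, not a real product API.
from statistics import mean

def flight_times(timestamps):
    """Intervals (in seconds) between successive key presses."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def enroll(samples):
    """Average each flight-time interval across several typing samples
    of the same passphrase to build a simple rhythm profile."""
    per_interval = zip(*(flight_times(s) for s in samples))
    return [mean(col) for col in per_interval]

def verify(profile, attempt, tolerance=0.05):
    """Accept only if every interval in the attempt is within
    `tolerance` seconds of the enrolled average."""
    observed = flight_times(attempt)
    if len(observed) != len(profile):
        return False
    return all(abs(o - p) <= tolerance for o, p in zip(observed, profile))

# Enrollment: three samples of the same user typing one passphrase.
samples = [
    [0.00, 0.11, 0.25, 0.33],
    [0.00, 0.12, 0.24, 0.34],
    [0.00, 0.10, 0.26, 0.32],
]
profile = enroll(samples)
print(verify(profile, [0.00, 0.11, 0.25, 0.33]))  # matching rhythm -> True
print(verify(profile, [0.00, 0.30, 0.60, 0.90]))  # different rhythm -> False
```

A production system would use many more features (dwell times, pressure, error patterns) and a statistical model rather than a fixed tolerance, but the principle is the same: the typing rhythm, not the password text, identifies the person.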
[2] For more on this, see: https://www.forbes.com/sites/paularmstrongtech/2017/02/14/facebook-users-posted-a-third-less-content-in-2016-than-in-2015/#3c473e83776d
[3] See NewsGuard: http://www.publicisgroupe.com/en/news/press-releases/brill-and-crovitz-announce-launch-of-newsguard-to-fight-fake-news-en-1