It is becoming difficult to distinguish between fake and real identities. Recently, I’ve been exposed to a series of artificial intelligence-generated videos and audio clips. Most of them are designed to deceive, and they are catching people unawares. One of them promotes a supposed investment scheme by a global businessman, presented through a fake video of a newsreader.
It’s surprising that even intelligent people have been caught off guard by such schemes. It seems to me there’s little being done to safeguard consumers of video and audio content from the deluge of fake material.
What should be done? To determine that, it’s important to consider what could go wrong. Imagine something that appears to be a news report spreading falsehoods about a leading figure in an election. Imagine a large number of people believing these falsehoods to the extent that it influences their decisions.
Imagine an audio message spreading across a town, warning of a false imminent threat and prompting people to change their movements, even if only for a day. All of these scenarios are possible if nothing is done.
The required action can no longer be limited to individuals. Of course, each person has to carry their own load, but the nature of the damage that fake content could cause calls for an operating-system level of intervention.
By operating system, I mean that device manufacturers need to intervene in the machines they provide to the public. It may also be necessary for users of online platforms to have a registered identity. Once users are registered, an authenticity indicator may be needed to signal to other users that their content can be trusted.
The current situation with fake content also creates an opportunity for tech businesses to add a layer to their offerings that can guarantee authentic information. In the near future we may be introduced to new types of browsers that can detect false content. If this becomes a reality, we will be required to pay a premium to access authentic information online. A price tag alone, however, may not be enough to curb misinformation. A more stringent online environment may be required to ensure accountability. At the end of the day there will be multiple environments online: a more open, less reliable environment and a closed, more reliable one will be on offer.
These interventions may take time to become standard. For now, it may be necessary for the digital sphere to have more organisations dedicated to separating fact from fiction.
At some level, online platforms will have to take collective responsibility for keeping the online environment clean. Already, some countries are doing more to limit harm. In view of current developments, which include rampant misuse, researchers and ethicists have attempted to lay down rules for AI, including the 2018 Montreal Declaration for the Responsible Development of Artificial Intelligence and the 2019 Recommendation on Artificial Intelligence from the Organisation for Economic Co-operation and Development.
An initiative called the Partnership on AI, a non-profit organisation that includes major industry partners, fosters dialogue on best practices, although some observers and participants have questioned whether it has had any impact.
Countries will have to create dedicated entities for detecting fake information. All of these efforts, together with increased awareness about fake content, may just save people from dangerous online scams.
Wesley Diphoko is the Editor-In-Chief of FastCompany magazine
BUSINESS REPORT