Artificial Intelligence (AI) will revolutionize the world and the role of humans in it even more than the industrial revolution did. AI will also transform human consciousness more than any movement since the Enlightenment, and arguably will go much further. The main problem that plagues society is one that has prevailed since time began: selfishness.
AI is progressing at a tremendous rate, and it is being implemented faster than we humans can comprehend or cope with the consequences. AI developers rush to deploy their algorithms with blinding speed because they fear losing a competitive edge if they take the time to conduct comprehensive testing before deployment. Innovation and competitiveness are not evil concepts; they have led to some of humankind's greatest accomplishments. However, the rush to implement and test in the field without considering the consequences is selfish. And who takes the blame when an AI system fails?
I won’t go into the well-known details, but in light of selfishness, consider the death of Elaine Herzberg, the Arizona pedestrian struck by an autonomous Uber car. The most troubling aspect of the video was that the car made no attempt to swerve to miss her -- and she was a large, moving person walking a bicycle across the street in an open, uncrowded area.
Why did the AI not swerve into the empty lane to attempt to avoid the collision? Was this an AI algorithm failure? Driverless cars are fitted with a system of cameras, radar, and lidar sensors that detect traffic, pedestrians, and other objects, day or night. Here is where the problem of selfishness is evident. A human driver most likely would have swerved, even into oncoming traffic, to avoid hitting a pedestrian. A human driver would have mowed down a hedge, jumped the curb, or even collided with a tree to avoid running over a woman and her bicycle. Yet the Uber car never slowed or veered; it plowed along, eerily, straight ahead, even after impact.
AI is a learning collection of algorithms, but it has no sense of ethics or moral choice unless that is programmed into its calculations. In other words, the autonomous driver is only as good as its programmer. AI agents do not have a human's judgment in exigencies and unexpected situations. A simplified version of the Uber car's logic, with the algorithmic math removed, may have looked like this:
disobey traffic laws = avoid object
obey traffic laws = collide with object
Command: Laws should be obeyed
Σ = Collide with object
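To make the point concrete, the rule-priority logic sketched above can be expressed as a few lines of code. This is a purely hypothetical illustration, not Uber's actual software; the rule names and priority numbers are invented to show how a fixed ordering of rules, with no ethical weighting, mechanically produces the collision outcome:

```python
# Hypothetical sketch of rule-priority decision logic.
# The rules and priorities below are invented for illustration only.

def choose_action(rules):
    """Return the action favored by the highest-priority rule.

    `rules` maps each candidate action to a priority
    (lower number = higher priority).
    """
    return min(rules, key=rules.get)

# Priorities as implied by the simplified version above:
# obeying traffic laws (stay in lane) outranks avoiding the object.
rules = {
    "collide with object (obey traffic laws)": 1,
    "avoid object (disobey traffic laws)": 2,
}
print(choose_action(rules))
# prints: collide with object (obey traffic laws)

# Reversing the priorities, so that the pedestrian's safety outweighs
# lane discipline, flips the outcome:
rules_ethical = {
    "collide with object (obey traffic laws)": 2,
    "avoid object (disobey traffic laws)": 1,
}
print(choose_action(rules_ethical))
# prints: avoid object (disobey traffic laws)
```

The sketch shows that the "decision" is nothing more than arithmetic over whatever priorities the programmer supplied; the machine has no stake in the outcome either way.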
The Uber vehicle arguably did what was simplest for itself rather than what was best for the pedestrian. Was this a programming failure, or, strictly conceived, a programming success? The revelation of this selfishness flaw in the AI is why Uber quickly suspended its autonomous drive mode for further study.
The public relations problem for Uber is not only one of reputation and public confidence. It has also become a problem of disrespect for publics: the incident revealed that we are all guinea pigs in Uber’s AI test, and that the test was not conceived with ethics in mind.
The curtain has been opened and the wizard revealed. The AI’s flaw of selfishness is now on display for everyone to see. And more importantly, in public relations terms, the selfishness and hubris of Uber’s management is on display for all to see.
Any time an AI agent is programmed with self-awareness, self-preservation, or a survival mode, it will act selfishly. Selfishness becomes mathematically predictable. This is the problem moral philosophers have been trying to rule out since moral philosophy began. The Uber incident reveals that selfishness is not only a problem of human nature but a problem of machine programming as well. Ethicists argue that the root of almost every unethical decision is selfishness. Selfishness is the bias that warps the playing field. Selfishness corrupts all ethical behavior.
As more companies become involved in AI and PR professionals are expected to use AI data, advise on its use, develop policy around it, counsel the CEO and clients on it, and comment publicly on it, we should examine AI’s implications. How can we de-program selfishness from AI? How can we get our management and clients to see the potential problems posed by selfishness? And how can we ensure that all variables are weighed equally by logic, not selfishness? We could use the Uber case (and its undisclosed settlement and seemingly unending investigations) as one example.
In the coming AI revolution, economies of scale will be drastically altered, hundreds of thousands of people will be displaced from the workforce, and our very conceptions of privacy -- and apparently safety -- will change. All of these implications must be explored in detail before we allow AI free rein in the real world. AI will redefine our very role in the world just as the industrial revolution moved society from agrarian, rural communities of extended families to cities based on mass production and transport, with nuclear families and working individuals. This time, however, the stakes are higher. When AI can program itself and truly becomes smarter than we are, who is to say that selfishness will not rule its decisions?
Shannon Bowen is a professor at the University of South Carolina and a member of the board of trustees of Page and the board of directors at the International Public Relations Research Conference. She can be reached at firstname.lastname@example.org.