Harbert Magazine
The risks of having both

Business ethics involves principles, values, and norms embedded in organizations through individuals, codes of ethics, and the legal system. These stem from human behavior and experience.
Artificial intelligence (AI) can simulate some aspects of human behavior, including cognitive functions such as problem solving and learning. In many cases, it can handle complexity beyond what a human being can process.
But what about ethics? The ability of AI to incorporate ethical decision-making has been challenged. How should AI make ethical decisions? What is a responsible role for AI in our society?
Big data, combined with cloud computing, enables AI to perform advanced analytics and to power “smart” systems that interact with people and deliver information and solutions in business. AI has the potential to disrupt and change every aspect of business.
Introducing AI requires attention to public safety, security, and privacy, as well as building trust and understanding. Values, norms, and behavior relate to social and cultural human decisions. As AI systems become more complex, there is a need to explore their ethics-related impact and to build ethics components into machine learning.
Ethics concerns exist because AI can make autonomous decisions. AI can make decisions and implement actions based on the algorithms or rules provided by its programmers. For example, there are decisions about what kind of decision parameters should go into drones, robots, and autonomous cars. Does a driverless car protect the driver and passengers first and foremost, or do the decision criteria favor the potentially more vulnerable pedestrian? In addition, an AI program will need oversight to monitor and assess the outcomes of decisions or directions produced by machine learning.
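To make the idea of programmer-supplied decision parameters concrete, the sketch below shows one hypothetical way such a rule could be encoded. Every name, weight, and the single policy flag are invented for illustration; no manufacturer's actual logic is implied.

```python
# Hypothetical illustration only: an explicit, programmer-supplied decision
# parameter of the kind described above. Parties, weights, and the policy
# flag are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Party:
    name: str             # e.g., "passenger" or "pedestrian"
    vulnerability: float  # 0.0 (well protected) to 1.0 (highly exposed)

def harm(parties: list[Party], protect_vulnerable_first: bool) -> float:
    """Score the harm of putting these parties at risk under one policy."""
    if protect_vulnerable_first:
        # Policy A: harm to exposed parties (pedestrians) counts more heavily.
        return sum(p.vulnerability for p in parties)
    # Policy B: harm to the vehicle's own occupants counts more heavily.
    return sum(2.0 if p.name == "passenger" else 1.0 for p in parties)

def choose_maneuver(options: dict[str, list[Party]],
                    protect_vulnerable_first: bool) -> str:
    """Pick the maneuver whose at-risk parties carry the lowest harm score."""
    return min(options, key=lambda m: harm(options[m], protect_vulnerable_first))

# Swerving risks the passenger; braking straight risks a pedestrian.
options = {
    "swerve": [Party("passenger", 0.3)],
    "brake_straight": [Party("pedestrian", 0.9)],
}
print(choose_maneuver(options, protect_vulnerable_first=True))   # -> "swerve"
print(choose_maneuver(options, protect_vulnerable_first=False))  # -> "brake_straight"
```

The point of the sketch is that the ethical choice lives in a single parameter a programmer must set in advance, which is exactly where the oversight question arises.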
There is evidence that AI systems have been involved in accidental, or in some cases intentional, ethical lapses that could have major consequences. Targeting markets based on demographics could even result in discrimination. Predictive analytics can target market segments, but doing so raises data privacy concerns. Addressing these risks will require AI systems designed and programmed with an ethical decision-making component.
This will require machine learning with the ability to make ethical decisions consistent with the organization's ethical culture; in other words, complex algorithms that mirror the ethical decision-making of their human partners. In the future, humans will work alongside AI-enabled robots, drones, and other devices and will depend on these machines to help maintain ethical organizational cultures.
AI systems that can think like humans will need to make ethical decisions. This will require effective ethics and compliance systems programmed into AI devices. All of the risk areas associated with AI decisions have to be monitored and compared to the organization's ethical standards and legal requirements, which in turn requires identifying the issues and risks associated with AI. Most organizations are governed by ethical values, codes, and compliance programs. The same will be required of AI.
The development of an independent code of ethics for specific AI applications will be needed. General value statements used in organizational codes of ethics may be too broad to provide direction. Initially, AI will rely more on compliance algorithms. Privacy, for example, will be a key issue: should AI-enabled drones scan to identify individuals and their behavior?
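A toy sketch of what such a compliance algorithm might look like appears below: a fixed rule set consulted before an AI-enabled drone acts. The rule names and the consent check are assumptions made purely for illustration.

```python
# Toy illustration of an early "compliance algorithm": a hard-coded rule set
# checked before an AI-enabled drone acts. Rule names are hypothetical.
PROHIBITED_ACTIONS = {
    "identify_individual",      # facial recognition without consent
    "record_private_property",  # capturing footage of private areas
}

def is_permitted(action: str, has_consent: bool = False) -> bool:
    """Allow an action unless a compliance rule forbids it."""
    if action in PROHIBITED_ACTIONS and not has_consent:
        return False
    return True

print(is_permitted("survey_crop_field"))          # True
print(is_permitted("identify_individual"))        # False: blocked by the rule
print(is_permitted("identify_individual", True))  # True: consent recorded
```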
Methodologies will be needed that render decisions the way humans make ethical decisions. And AI will not be concerned with ethical decisions alone; legal compliance must also be built into machine learning. Boundaries will need to be imposed to address laws, regulations, and other requirements. Ethics will serve as a buffer, supporting areas such as industry self-regulation and core practices that meet societal expectations.
As AI advances, there may be new laws to protect consumers and employees. Laws promoting equity and safety, as well as competitive relations, may be needed. Microsoft was the first tech company to call for regulation of facial-recognition technology. A key challenge will be transparency about how algorithms make ethical decisions. Already, some systems have been found to discriminate against African-Americans and Hispanics, and AI is being used to monitor cities in China.
There will need to be ways to integrate AI decision-making with human assistance. For example, autonomous cars may have to decide what to do in an emergency. In the early stages of AI ethics, a control component that lets a human assist in the decision may be needed. This could help bridge ethical decision-making until AI is better integrated into an organization's decision processes.
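The sketch below illustrates one possible form such a control component could take, assuming a confidence threshold below which the system defers to a human operator; the threshold, function names, and interfaces are hypothetical.

```python
# A minimal sketch, not a production design: a "human-assist" gate that sits
# between an AI decision and its execution. Threshold and names are assumed.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off below which a human must decide

def decide_with_oversight(ai_decision: str,
                          ai_confidence: float,
                          ask_human: Callable[[str], str]) -> str:
    """Return the AI's decision only when it is confident; otherwise defer.

    `ask_human` stands in for whatever escalation channel an organization
    uses (a remote operator console, a review queue, etc.).
    """
    if ai_confidence >= CONFIDENCE_THRESHOLD:
        return ai_decision
    return ask_human(f"AI proposed '{ai_decision}' with confidence "
                     f"{ai_confidence:.2f}; please confirm or override.")

# Example: a low-confidence emergency maneuver is routed to a human.
human_override = lambda prompt: "controlled_stop"  # stand-in for a real operator
print(decide_with_oversight("swerve_left", 0.62, ask_human=human_override))
# -> "controlled_stop"
```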
Developing AI for the common good of society should be the objective. From diagnosing cancer to performing high-risk jobs, AI has the potential to make the business world more responsible and accountable. AI for the common good should look beyond individual and business interests. In a way, AI may change a company's relationships with the many stakeholders who have an interest in it. Therefore, AI must operate with an understanding of its impact on a firm's social responsibility. There will be a need to look not only at internal organizational, legal, and ethical concerns, but also at issues such as sustainability, consumer protection, employee welfare, social issues, and even corporate governance.
AI enabled by blockchain has the potential to improve ethics. Blockchain is a series of blocks of information that record ordered transactions and data. This information system is decentralized and distributed on a peer-to-peer network, so no one can change the history or data to take advantage of others. This immutable audit trail means financial transactions will be less susceptible to fraud. In accounting and financial reporting, there will be a permanent record of entries and of who made them. Carrefour SA and Walmart are already using blockchain to improve food safety. Through blockchain, suppliers can quickly trace food that poses health dangers, which can save lives.
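The tamper evidence that makes this audit trail trustworthy can be shown in a few lines. The sketch below is a simplification that omits consensus, signatures, and peer distribution; it only demonstrates how storing each block's hash in the next block exposes any later change to the record.

```python
# Minimal sketch of a hash-chained ledger: each block stores the hash of the
# previous block, so altering any recorded entry breaks every later link.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list[dict], entry: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"entry": entry, "prev_hash": prev})

def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute every link; an edited block invalidates all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list[dict] = []
add_block(ledger, "lot 42 shipped from supplier A")
add_block(ledger, "lot 42 received at distribution center")

print(chain_is_valid(ledger))                         # True
ledger[0]["entry"] = "lot 43 shipped from supplier A"  # tamper with history
print(chain_is_valid(ledger))                         # False: change is exposed
```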
At this point in its development, AI cannot internalize the human capacity for ethical decision-making. We are in the early stage of identifying the issues and solutions involved in incorporating AI into business and society. Because the benefits of AI are so great, the ethical challenges need to be resolved to create integrity in this powerful technology.
O.C. Ferrell
James T. Pursell Sr. Eminent Scholar
Director of the Center for Organizational Ethical Cultures