Opinion: Ethics of Artificial Intelligence - Pelican PMS

Opinion: Ethics of Artificial Intelligence

We have moved from a time when technology was used merely for convenience to a stage where we are building machines that can replace humans. Artificial Intelligence (AI), machine learning and deep learning are popular buzzwords in the technology realm. Each day, engineers find new ways to implement AI and further develop their products. Giving machines the power to learn and think like humans carries risks, and at such a time it is appropriate for non-technical considerations to influence technical decisions. Machines make decisions by optimizing toward a goal; there is no humane thought or empathy behind them. In performing such tasks there is therefore a seldom-addressed grey area: ethics.

We must be cautious while treading into the unprecedented implementation of this powerful technology. The greatest fear regarding the negative implications of AI is one we may never experience but is believed to be a possibility: that machines become ‘super intelligent’, develop goals of their own and build runaway technology to overpower humans. Hence, there is a need for a strong set of ground rules, or ethics, to be upheld while implementing AI for different purposes.

Two major applications of AI with the potential to revolutionize their respective industries are finance and automated machines such as self-driving cars.

AI in Finance

A recent report by the Finance Innovation Lab effectively summarizes the impact of AI on the financial services sector. According to the report, AI has shown immense potential to help financial services institutions work for customers in a number of ways. It can help people make sense of their financial habits using their transaction data and market information, and suggest the best options available to them. This could help close the advice gap by offering people insights, recommendations and advice about their finances, at scale and affordably. By tracking patterns of behaviour, AI could make it easier to identify people who need help with their finances before a crisis, so that they or other organizations can take pre-emptive action to support them. AI can also automate actions that serve our best interests, such as transferring money between accounts to avoid overdraft fees or switching to better providers and products. Automation may also help people with mental health conditions who experience a lack of control over their spending and want to pre-commit to certain behaviours. More broadly, AI can drive competition in a way that rebalances power between customers and the finance industry.
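To make the overdraft-avoidance automation above concrete, here is a minimal, hypothetical sketch of the kind of rule such a system might apply. The function name, the safety buffer, and the transfer logic are all illustrative assumptions, not any real provider's API:

```python
# Hypothetical sketch: plan a savings-to-checking transfer that
# avoids an overdraft. All names and thresholds are illustrative.

def plan_overdraft_transfer(checking_balance, savings_balance,
                            upcoming_debits, buffer=25.0):
    """Return the transfer amount needed so the checking account
    stays above `buffer` after upcoming debits, or 0.0 if none."""
    projected = checking_balance - sum(upcoming_debits)
    shortfall = buffer - projected
    if shortfall <= 0:
        return 0.0  # projected balance already clears the buffer
    # Never transfer more than the savings account actually holds.
    return min(shortfall, savings_balance)

# Example: $100 in checking, $500 in savings, $140 of debits due.
amount = plan_overdraft_transfer(100.0, 500.0, [90.0, 50.0])
```

A production system would of course learn the debit forecast from transaction history rather than take it as an input, but the decision step reduces to a simple, auditable rule like this one.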

However, these opportunities undoubtedly come with a number of risks. For example, customers may not have access to the AI-driven insights derived from their data, creating an information asymmetry that favours the industry over customers. Moreover, fintech startups that develop AI-powered financial services are profit-driven and may identify and exploit customers’ behavioural biases. Another major risk is that current modelling techniques require industry experts to tune algorithm parameters and curate datasets; if AI is used for full automation and algorithmic trading, people no longer control the decision-making, which could lead to severe meltdowns during times of financial distress. In general, businesses are unlikely to fully understand the capabilities of the technology they develop and deploy. Additionally, both businesses and regulators may lack the skills needed to investigate the technology, especially since the regulators who play a key role are generally experts in economics rather than technology. These risks boil down to a fundamental problem: technology that seems enticing when introduced can, over time, lead to significant and irreversible issues.

AI in Self-Driving Cars

Companies like Tesla and Uber have been in the spotlight due to crashes involving their self-driving cars. While self-driving cars may handle everyday driving with minimal difficulty, it is such incidents that make us doubt an AI’s decision-making ability, drawing attention to the ethics of using AI in self-driving cars. Self-driving cars face three key ethical issues, concerning humanity, accountability and privacy:

Our economic system is based on compensating people for their contribution to the economy in the form of wages. However, companies can use AI systems to reduce dependence on the human workforce and cut costs, leading to mass unemployment. This applies to self-driving cars, which threaten the jobs of taxi drivers. While the purpose of technology is to serve people and make their tasks easier, AI systems put people’s livelihoods at risk. Some argue that implementing AI technologies can also create new, better jobs. However, the scale of such job transitions is vague, and during such times the march of progress must be slowed to accommodate these humane concerns.

While unemployment threatens humanity on a large scale, accountability is an ethical issue specific to individual incidents. When there is a car crash, the driver who caused the accident is held accountable; if a crash between two cars kills someone, the victim’s family seeks retribution from the law, which punishes the driver at fault. But who is held accountable when a crash is caused by a self-driving car? Most likely the car manufacturer will accept liability and pay for damages, since the cost of a few crashes would not outweigh the profitability of self-driving cars. But if the crash is fatal, it does not seem ethical to pay money to compensate for a life lost to a technological error. From a technical standpoint, it is important to remember that in machine learning the machine is taught how to learn from large datasets, yet the logic behind decisions made through deep learning is opaque even to its developers. It is unclear whether it is safe to trust AI systems with tasks where human lives are at stake, especially when we do not understand the logic behind the machine’s decisions.

Self-driving cars navigate solely using data, including data from GPS and from sensors surrounding the car. The system therefore holds detailed information on the car’s whereabouts, and this data is set to become one of the most valuable commodities of the driverless-car revolution. Ethics would dictate that individuals have absolute control over whether their data is shared beyond what is needed to operate the vehicle. But given the potential profitability of such data, companies are likely to take advantage of it, posing a threat to privacy.
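One common privacy safeguard in this setting is data minimization: reducing the precision of location data before any of it leaves the vehicle. The sketch below illustrates that idea by rounding GPS coordinates; the function name and the choice of two decimal places (roughly 1 km of precision) are illustrative assumptions, not an industry standard:

```python
# Hypothetical sketch of location-data minimization: round GPS
# coordinates so that shared data cannot pinpoint an exact address.
# Two decimal places (~1 km) is an illustrative choice of precision.

def coarsen_location(lat, lon, decimals=2):
    """Round a (lat, lon) pair to reduce location precision."""
    return (round(lat, decimals), round(lon, decimals))

exact = (40.748817, -73.985428)    # a precise fix from the car's GPS
shared = coarsen_location(*exact)  # coarse enough to hide the address
```

Techniques like this let a manufacturer keep full-precision data inside the vehicle for driving decisions while sharing only coarse data externally, which is exactly the kind of control over data sharing that the ethical concern above calls for.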

AI technologies are undoubtedly revolutionary in the services they provide, but we must be wary of the negative implications of their introduction into markets. AI is being implemented in an increasing number of fields, and it would be best if governments soon introduced policies that address the issues highlighted here. The business leaders, engineers and policymakers who make significant decisions about introducing such technologies are responsible for deciding the fate of this situation; when making those decisions, they must weigh the benefits to individuals fairly against protecting the welfare of the community.