By Jamiel Sheikh
Artificial intelligence (AI) systems are going to boost profits and create new opportunities for growth in just about every sector and industry. However, the transition will not be seamless. As AI is integrated, challenges will emerge with which businesses must contend. Thankfully, preparing in advance can go a long way toward simplifying the process of AI adoption.
Preparing for Redundancies
It might not be a cheery topic, but the truth is that the automation brought on by AI will make many jobs redundant. One department that could be particularly affected is customer service. As artificial intelligence systems develop, they will increasingly be able to field customers' questions.
Already, AI systems are answering questions in chat; in the future, they may reply to emails and even answer phone calls. The challenge will be how to replace hundreds or even thousands of personnel with an AI without creating an uproar. The best solution is most likely a gradual rollout over a number of years, which also has the advantage of giving a corporation time to test and understand how the AI performs.
However, it’s not just customer service that will be impacted by AI. Computer systems may even disrupt the legal department. Automated programs can search legal precedents and draft contracts, reducing the number of associates needed. Again, the challenge is automating tasks without creating huge gaps in a company’s workforce.
Creating a Collaborative Work Environment
Even when employees are not losing their jobs to AI, they may be placed in a position in which they need to work collaboratively with an AI. This may be difficult at first, as employees may be unwilling to defer to the system or use it efficiently.
Companies that integrate AI will have to focus on creating effective training programs to teach their employees how to cooperate with AI. It would be unwise to bring in a new AI and expect employees to work with it perfectly right off the bat.
How to Resolve Disputes with AI
As AI becomes more intelligent and increasingly tackles executive-level tasks, there are inevitably going to be disputes. When these arise, how much weight will an AI’s opinion be given when it diverges from that of a manager? What if an AI strongly recommends against a bold move that the CEO has proposed?
If artificial intelligence systems never disagreed with their human counterparts, their worth as high-functioning cognitive systems would be virtually nil. Some dissent is called for, and an AI’s fresh perspective on a situation may prove very profitable. However, there must be a system in place to help resolve disputes. Ignoring an AI and following it blindly are both inadvisable, and companies will need to create a framework so that conflicting opinions do not lead to tension within an organization.
Assigning Responsibility for AI
Perhaps the largest challenge of integrating an AI system into a business will be setting a standard for accountability. Imagine a situation in which a car company develops a self-driving car and sells one hundred thousand units. At some point, there is a chance that two of these self-driving cars will crash into each other. Who is to blame? Who can the insurance company and the family hold accountable, especially if there is a fatality?
Or, in a scenario in which an AI writes a contract that later proves flawed and leads to millions in losses, who is to blame? Is it the company that created the AI? The software engineer who works with it? Or the legal department that failed to catch the mistake?
As artificial intelligence systems are non-corporeal and somewhat hard to define, it will be imperative that companies create standards of culpability. Before anything happens, a corporation should have a complete understanding of how blame will be attributed and handled. Corporations that assume artificial intelligence systems never make mistakes will be in for an unwanted surprise when something goes wrong and everyone is looking for a scapegoat.
Privacy vs. Data Analytics
Finally, there will be the challenge of making use of AI’s tremendous ability to sort through data and predict user behavior without violating user privacy. For instance, just because an AI can predict from web and lifestyle habits that one customer is taking medication or another is struggling to become pregnant, that does not mean a company should capitalize on that information for product placement. Further, if a company is in possession of customer data, it is its responsibility to protect that data.
As the spate of recent hacks has proven, that may be easier said than done. With that in mind, companies ought to think carefully about exactly how much power they want to assign to an AI. Striking a balance between insight and consumer confidentiality will be a challenge, but it’s something that businesses must get right.
About the Author
Jamiel Sheikh is CEO of Chainhaus, an advisory, software development, application studio and education company focused on blockchain, artificial intelligence and machine learning. Jamiel has more than 15 years of experience in technology, capital markets, real estate and management working for organizations like Lehman Brothers, JPMorgan, Bank of America, Sun Microsystems, SONY and Citigroup. Jamiel is an adjunct professor at Columbia Business School, NYU and CUNY, teaching graduate-level blockchain, AI and data science subjects. He runs one of the largest blockchain, AI and data science Meetups in NYC.