Artificial Intelligence (AI) has permeated practically every aspect of our lives, and the workplace is no exception. With the aim of boosting productivity and letting employees focus on higher-value work, organizations across a wide variety of sectors in Australia have embraced AI in their workflows. The rapid pace and scale of development have only accelerated this trend. However, not every AI application or tool is approved by the employer. The matter enters the murky waters of Shadow AI when employees turn to these unauthorized tools.
What is Shadow AI?
Simply put, Shadow AI is the unsanctioned use of AI applications or tools by employees without explicit approval from the HR department. While the matter may seem insignificant at first, on closer inspection it can cause real damage to an organization. Shadow AI is an emerging subset of the broader practice of ‘Shadow IT’, wherein employees use unauthorized IT tools or software.
How does Shadow AI emerge?
The practice of using unsanctioned tools can arise due to a multitude of factors:
- Deadline Pressure:
Employees may find themselves uncomfortably close to a project or task deadline. In such situations, they may turn to an unapproved AI tool as an easy way out of a tough spot.
- Ambiguous Policies:
AI is a rather recent phenomenon. Only a couple of years ago, these advanced tools had not reached the mass market and were largely confined to research labs. As a result, many companies still do not have a clear policy on AI use in the workplace. In fact, a survey found that only 19% of IT and HR departments have a framework in place for AI governance. Consequently, employees may not even realise that their actions breach company regulations. To solve this, companies should craft policy statements in which the dos and don’ts are stated clearly enough to leave no room for confusion.
- Easy Accessibility:
AI tools are now very accessible. Their user interfaces tend to be friendly and easy to navigate, and almost no advanced skills are needed to leverage them properly (beyond learning the art and science of prompting). This makes using AI a very attractive prospect.
- Enhanced Productivity:
It is no secret that AI can give a real boost to productivity, allowing employees to get more work done on time. Even if the organization allows certain tools, they might not be the ones an employee is used to. The friction of learning a new system can nudge people towards their preferred tool, even if it is not allowed by the HR department.
What are the risks of Shadow AI?
Unsanctioned use can have quite dire consequences for the company. Here’s how:
1. Reputation Erosion
There are still plenty of customers who do not want their work done by AI. In fact, its inclusion in the final output may even directly affect purchase intention. In such a scenario, if employees used their own tools and the client discovered it, the result could be a loss of business. For instance, if the Content Manager at a digital marketing agency used an AI image generator to make blog illustrations despite the client's aversion to AI-generated imagery, it could lead to a severing of business ties.
2. Impact on quality of work
While a good amount of progress has already been made, it is important to remember that this technology is still in its infancy. AI models carry their own biases, which can have a detrimental impact on the final work if not curtailed. Additionally, the output of models that the HR department has not vetted may not align with the objectives, values and ethical code of the company, causing dissonance between the work produced and the company's stance on matters like ethics and aims. Finally, the model preferred by the employee may also suffer from issues like overfitting, wherein the model mimics the characteristics of its training dataset in the final output, even when the input is vastly different from that dataset.
3. Security Breach
HR departments usually scrutinize the security level of an AI model before approving it. An employee, however, might not place security at the top of the list when selecting a model. This is a serious cause for concern, as the employee is likely to input proprietary and sensitive corporate data into an AI model that may not have the best security measures. A 2024 survey found that 20% of data leakage incidents were caused by staff using unvetted GenAI tools.
How can companies curb Shadow AI?
- Create a comprehensive policy:
As stated before, the lack of a company policy is one of the primary reasons behind the rise of Shadow AI. A great starting point is to create an AI governance framework with the help of in-house HR professionals.
- Utilize IT infrastructure:
HR can also work with IT to use tools like a firewall or network monitoring software to block unauthorized platforms and tools. Putting such guardrails in place can significantly reduce the chances of unauthorized AI use at work.
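As a rough illustration of what such network monitoring could look like, the sketch below flags proxy-log entries where a known AI service was accessed that is not on the approved list. The domain names, the approved list, and the log format are all hypothetical assumptions for the example, not a vetted blocklist.

```python
# Minimal sketch: flag unapproved AI tool traffic in proxy logs.
# Domains and log format are illustrative assumptions.

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}  # tools HR/IT have vetted (hypothetical)
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "approved-ai.example.com",
}  # AI services the monitor watches for

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a known AI domain was
    accessed but is not on the approved list."""
    flagged = []
    for line in log_lines:
        # Assumed log format: "<user> <domain>" on each line
        user, domain = line.split()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append((user, domain))
    return flagged

logs = [
    "alice approved-ai.example.com",  # sanctioned tool, not flagged
    "bob chat.openai.com",            # unsanctioned tool, flagged
]
print(flag_shadow_ai(logs))  # → [('bob', 'chat.openai.com')]
```

In practice this logic would sit inside the proxy or firewall itself (as a blocklist rule rather than an after-the-fact report), but the same allow-list-versus-watch-list idea applies.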
- Give AI safety training:
HR can also provide detailed AI safety training sessions to educate employees about the perils of using unauthorized tools. Employees may simply not grasp the gravity of the issue, and training can remedy that.
- Boost collaboration:
A closer collaboration between the IT, HR, Operations and business unit teams can lead to an open dialogue about the prevailing issues.
- Simplify Approval Process:
Give employees a way to seek approval for their preferred AI tool. The HR department can conduct its security and feasibility checks and issue a final decision. If a tool is rejected, it is important to communicate this promptly, with the reasons explained in detail.
As the world grapples with this emerging technology, it is vital that companies learn how to cope with the challenges and pitfalls of using AI. Putting the right systems and processes in place can go a long way in improving the quality of work.