
The new AI governance model that puts Shadow IT on notice

Artificial intelligence (AI) tools are spreading rapidly across workplaces, reshaping how everyday tasks get done. From marketing teams drafting campaigns in ChatGPT to software engineers experimenting with code generators, AI is quietly creeping into every corner of business operations. The problem? Much of this adoption is happening under the radar, without any oversight or governance.

As a result, shadow AI has emerged as a new security blind spot. Instances of unmanaged and unauthorised AI use will continue to rise until organisations rethink their approach to AI policy.

For CIOs, the answer isn’t to prohibit AI tools outright, but to implement flexible guardrails that strike a balance between innovation and risk management. The urgency is undeniable: 93% of organisations have experienced at least one incident of unauthorised shadow AI use, with 36% reporting multiple instances. These figures reveal a stark disconnect between formal AI policies and the way employees actually engage with AI tools in their day-to-day work.

Here’s how organisations can begin to address the challenge: 

Establishing governance and guardrails 

To get ahead of AI risks, organisations need AI policies that encourage AI usage within reason – and in line with their risk appetite. However, they can’t do that with outdated governance models and tools that aren’t purpose-built to detect and monitor AI usage across the business.

Identify the right framework 

There are already a number of frameworks and resources available – including guidance from the Department for Science, Innovation and Technology (DSIT), the AI Playbook for Government, the Information Commissioner’s Office (ICO), and the AI Standards Hub (led by BSI, NPL and The Alan Turing Institute). These can help organisations build a responsible and robust framework for AI adoption, and they complement international standards from bodies such as the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) and the Organisation for Economic Co-operation and Development (OECD).

Invest in visibility tools 

As a business establishes its roadmap for AI risk management, it’s crucial that the security leadership team starts assessing what AI usage really looks like in the organisation. That means investing in visibility tools that can analyse access and behavioural patterns to surface generative AI usage in every nook and cranny of the business.
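As a rough illustration of the kind of signal such tooling relies on, the sketch below scans a web proxy log for requests to well-known generative AI endpoints and tallies them per user. The domain list and the CSV log format (with `user` and `host` columns) are illustrative assumptions for the example, not a description of any particular product’s behaviour.

```python
import csv
from collections import Counter

# Illustrative sample of domains associated with generative AI services.
# A real visibility tool would maintain a far larger, continuously
# updated catalogue of AI endpoints.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def find_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests to known AI domains, grouped by user.

    Assumes a CSV proxy log with 'user' and 'host' columns --
    a hypothetical format chosen for this sketch.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in AI_DOMAINS:
                usage[row["user"]] += 1
    return usage

if __name__ == "__main__":
    for user, hits in find_ai_usage("proxy.csv").most_common():
        print(f"{user}: {hits} requests to generative AI services")
```

In practice, commercial tools enrich this kind of log analysis with behavioural signals (browser extensions, API keys in outbound traffic), but even a simple domain tally can reveal how widespread shadow AI use already is.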

Establish an AI council 

With that information in hand, the CISO should consider establishing an AI council made up of stakeholders from across the organisation – including IT, security, legal and the C-suite – to discuss the risks, compliance issues and benefits arising from both the unauthorised and authorised tools already permeating their business environments. This council can start to mould policies that meet business needs in a risk-managed way.

For example, the council may notice that an unsafe shadow AI tool has taken off, but that a safer alternative exists. A policy may be established that explicitly bans the unsafe tool and recommends the alternative. Often these policies will need to be paired with investment not only in security controls, but also in those alternative AI tools. The council can also create a method for employees to submit new AI tooling for vetting and approval as advancements come to market.
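A minimal sketch of how such a policy might be encoded follows, assuming a hypothetical in-house policy register; the tool names, statuses and intake queue are placeholders, not a real product’s API.

```python
# Hypothetical policy register: each entry maps an AI tool to a status
# and, for banned tools, the vetted alternative the council recommends.
AI_TOOL_POLICY = {
    "UnvettedSummarizer": {"status": "banned", "alternative": "ApprovedSummarizer"},
    "ApprovedSummarizer": {"status": "approved", "alternative": None},
}

PENDING_REVIEW: list[str] = []  # intake queue for the AI council

def check_tool(name: str) -> str:
    """Return guidance for a requested AI tool under the council's policy."""
    entry = AI_TOOL_POLICY.get(name)
    if entry is None:
        # Unknown tools are routed to the council for vetting rather
        # than silently blocked, keeping the channel transparent.
        PENDING_REVIEW.append(name)
        return f"'{name}' is not yet vetted; submitted for council review."
    if entry["status"] == "banned":
        return f"'{name}' is banned; use '{entry['alternative']}' instead."
    return f"'{name}' is approved for use."

print(check_tool("UnvettedSummarizer"))
print(check_tool("BrandNewCodeGen"))
```

The design point is the third branch: rather than a flat deny, unrecognised tools feed the vetting pipeline, which is what keeps employees inside the sanctioned process.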

By creating this direct, transparent line of communication, employees can feel reassured that they are adhering to company AI policies, empowered to ask questions, and encouraged to explore new tools and methods that could support growth down the line.

Update AI policy training 

Engaging and training employees will play a crucial role in getting organisational buy-in to keep shadow AI at bay. With better policies in place, employees will need guidance on the nuances of responsible AI use, the reasons certain policies exist, and the risks of mishandling data. This training can help them become active partners in innovating safely.

In some sectors, the use of AI in the workplace has often been a taboo topic. Clearly outlining best practice for responsible AI usage and the rationale behind an organisation’s policies and processes can eliminate uncertainty and mitigate risk. 

Governing the future of AI

Shadow AI isn’t going away. As generative tools become more deeply embedded in everyday work, the challenge will only grow. Leaders must decide whether to see shadow AI as an uncontrollable threat or as an opportunity to rethink governance for the AI era. The organisations that thrive will be those that embrace innovation with clear guardrails, making AI both safe and transformative. 
