Date: January 16-17, 2026
Location: Pune, Maharashtra, India
Format: Presentation
Slides
Overview
Artificial Intelligence is no longer a background helper; it has become a participant in our software supply chains. Tools like Cursor, Windsurf, and other AI-powered IDEs can now generate code, recommend dependencies, suggest infrastructure, and even automate deployments. On the surface this looks like productivity magic. Under the hood, it is a supply chain disruptor.
This talk explores the emergence of “Shadow AI” as a new vector in software supply chain security, examining how AI-powered development tools introduce new risks and how organizations can address them.
Key Topics
Shadow AI as a supply chain disruptor
AI-powered IDEs and their impact on code generation
Risk classification for Shadow AI
Discovery and inventory of AI usage in production
Incentive loops and governance approaches (Carrots and Sticks)
Operational considerations for managing Shadow AI
About Cyfinoid Research
Research-focused cybersecurity firm
Major focus areas: Software Supply Chain, Cloud Security
Website: https://cyfinoid.com/
AI Generated Summary
AI Generated Content Disclaimer
Note: This summary is AI-generated and may contain inaccuracies, errors, or omissions. If you spot any issues, please contact the site owner for corrections. Errors or omissions are unintended.
This presentation at Identity Shield in Pune addresses Shadow AI as the newest and most disruptive force in software supply chain security. Anant Shrivastava examines how the rapid proliferation of AI tools across organizations, often adopted without IT oversight, creates a new class of shadow technology that intersects with traditional shadow IT risks. The talk covers the changing landscape, the various forms of shadow technology, the production implications, risk classification frameworks for Shadow AI, and a governance approach using both incentives (“carrots”) and enforcement (“sticks”) to bring unsanctioned AI usage under organizational control.
Key Topics Covered
The Changing World:
AI adoption has fundamentally changed how software is developed, deployed, and consumed
Traditional governance and security models cannot keep pace with how AI tools are being adopted across organizations
Shadows of All Kinds:
Shadow AI exists alongside other shadow technology categories: Shadow IT (unsanctioned infrastructure), Shadow Data (unmanaged data flows), and Shadow SaaS (unauthorized cloud services)
AI tools amplify all existing shadow categories by making it easier for anyone in the organization to build, deploy, and expose applications and services
Production Implications:
Shadow AI tools can introduce unvetted code, models, and dependencies into production environments
Without governance, AI-generated outputs may bypass code review, security testing, and compliance checks
The security team may not even be aware of AI tools being used, let alone what they produce
Current Approaches Are Insufficient:
Existing methods for handling Shadow IT (blocking, monitoring, policy enforcement) are not keeping pace with the speed and diversity of AI tool adoption
Traditional security controls were designed for a world where only technical staff built and deployed software
Inventory as the Foundation:
Discovering and cataloging AI tools in use across the organization is the essential first step
Without inventory, organizations cannot assess risk, enforce policy, or respond to incidents involving AI-generated artifacts
Operational Loop for Shadow AI Governance:
Continuous discovery, assessment, classification, and remediation cycle
Must operate at the speed of AI adoption, not at the pace of traditional IT governance
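The discovery-assessment-classification-remediation cycle can be sketched as a minimal pipeline. Everything below is illustrative: the tool names, the risk heuristic, and the remediation actions are assumptions for the sketch, not the framework presented in the talk.

```python
# Minimal sketch of a continuous Shadow AI governance loop:
# discover -> assess -> classify -> remediate, run as one cycle.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    source: str                 # where it was discovered, e.g. "sso-logs"
    risk: str = "unassessed"    # filled in during assessment

def discover() -> list[AITool]:
    # In practice this would aggregate network, endpoint, expense,
    # and SSO/identity telemetry; a hard-coded example stands in here.
    return [AITool("example-ai-ide", source="endpoint-agent")]

def assess(tool: AITool) -> AITool:
    # Placeholder heuristic: tools that touch source code are high risk.
    tool.risk = "high" if "ide" in tool.name else "low"
    return tool

def remediate(tool: AITool) -> str:
    # Proportional response rather than a blanket ban.
    return "security-review" if tool.risk == "high" else "sanction"

def run_cycle() -> dict[str, str]:
    # One pass of the loop; in production this runs continuously.
    return {t.name: remediate(assess(t)) for t in discover()}
```

In a real deployment each stage would be backed by telemetry and ticketing integrations; the point of the sketch is that the loop is a repeatable pipeline, not a one-off audit.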
Discovery Angles:
Multiple vectors for discovering Shadow AI usage: network traffic analysis, endpoint monitoring, procurement and expense tracking, SSO/identity federation logs, and direct team engagement
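One of those discovery vectors, matching outbound or SSO log entries against known AI-service domains, can be sketched as follows. The domain list and the simple "user domain" log format are assumptions for illustration only.

```python
# Hypothetical log-scanning sketch for one discovery angle:
# count traffic to known AI-service domains per log file.
from collections import Counter

# Illustrative watchlist; a real one would be curated and updated.
AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "cursor.sh"}

def scan_logs(lines: list[str]) -> Counter:
    """Count hits per AI domain across simple 'user domain' log lines."""
    hits = Counter()
    for line in lines:
        _user, _, domain = line.partition(" ")
        if domain in AI_DOMAINS:
            hits[domain] += 1
    return hits

logs = [
    "alice api.openai.com",
    "bob internal.corp.example",
    "alice cursor.sh",
]
# dict(scan_logs(logs)) -> {'api.openai.com': 1, 'cursor.sh': 1}
```

The same counting pattern applies to the other vectors the talk lists, such as expense records or endpoint process lists; only the parsing step changes.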
Shadow AI Risk Classification:
Structured framework for categorizing Shadow AI risks by severity and impact
Classification enables proportional response rather than blanket prohibition
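A structured classification like this is often implemented as a likelihood-by-impact scoring table that maps each finding to a proportional response. The tiers, thresholds, and response names below are assumptions for the sketch, not the talk's actual framework.

```python
# Illustrative Shadow AI risk classification: score a finding by
# likelihood and impact, then map the score to a proportional response.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def classify(likelihood: Level, impact: Level) -> str:
    score = likelihood * impact          # simple multiplicative matrix
    if score >= 6:
        return "block-and-review"        # e.g. AI tool writing to production
    if score >= 3:
        return "sanction-with-controls"  # approve, but add guardrails
    return "monitor"                     # low-risk experimentation
```

The thresholds are where proportionality lives: tightening or loosening them changes how much Shadow AI usage is blocked versus merely watched, without resorting to a blanket ban.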
Incentives and Enforcement (Carrots and Sticks):
Prohibition alone drives Shadow AI further underground; organizations need to provide sanctioned alternatives that meet user needs
Carrot: provide approved AI tools, training, and support; make the sanctioned path easier than the unsanctioned one
Stick: enforce policy through access controls, monitoring, and consequences for non-compliance
Incentive Loop:
Creating a positive feedback cycle where sanctioned AI usage is easier, faster, and better supported than shadow alternatives
Users who adopt sanctioned tools get better support, more features, and organizational backing
Rethinking Your Role:
Security and identity professionals must evolve their role from gatekeepers blocking AI adoption to enablers who provide secure, governed AI pathways
The goal is not to stop AI usage but to make it visible, assessed, and managed
Actionable Takeaways
Start with inventory: discover what AI tools are actually being used across your organization through network monitoring, endpoint analysis, expense tracking, and identity/SSO logs.
Classify Shadow AI risks using a structured framework: not all Shadow AI usage carries the same risk, and proportional responses are more effective than blanket bans.
Provide sanctioned AI alternatives that are genuinely useful β if the approved tools are harder to use than unsanctioned ones, Shadow AI will persist regardless of policy.
Build a continuous operational loop: discovery, assessment, classification, and remediation must run at the speed of AI adoption, not traditional IT governance cycles.
Use both carrots and sticks: incentivize sanctioned AI usage through better support and capabilities while enforcing policy through monitoring and access controls.
Evolve your security role from gatekeeper to enabler; the organizations that govern AI usage effectively will be those that make secure AI adoption the path of least resistance.
Recognize that Shadow AI is a supply chain problem: unvetted AI tools introduce unknown dependencies, unreviewed code, and unassessed models into your organization’s software supply chain.