Microsoft Copilot is generating real excitement in healthcare.
- Physicians want it for documentation.
- Administrators want it for workflow automation.
- Practice owners see the potential to reclaim hours that currently disappear into administrative overhead.
That enthusiasm is understandable, and in many cases, the productivity gains are real. One health system using Copilot in healthcare operations reported cutting documentation time by 50%.
But here is the part of the conversation that tends to get skipped: most ophthalmology practices are not ready for AI tools yet, and deploying them without the right foundation in place creates serious security and compliance exposure.
This is not a reason to avoid AI. But it is a reason to approach it correctly.
The Gap Between Excitement and Readiness
AI tools like Microsoft Copilot do not create new access to your data. They simply surface what already exists. Microsoft’s own technical guidance is explicit on this point: “Microsoft 365 Copilot honors the existing permissions model in the Microsoft 365 tenant. Copilot only surfaces data that a user is already authorized to access.”
That sounds reassuring until you consider what permissions actually look like inside a typical practice’s Microsoft 365 environment.
- SharePoint sites shared too broadly.
- Documents accessible to staff who no longer need them.
- Folders that were opened up during a busy onboarding period and never locked back down.
These are not hypothetical problems. They are the normal state of most Microsoft 365 tenants that have grown organically over time without deliberate governance. When you introduce an AI assistant into that environment, it does not clean up the mess, but it does make the mess easier to find.
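To make the cleanup problem concrete, here is a minimal sketch of what an oversharing audit looks like in principle: given an exported list of site permissions (however you obtain it), flag any site shared with an overly broad group. The group names, site names, and data structure below are illustrative assumptions, not Vertilocity's or Microsoft's actual tooling.

```python
# Illustrative sketch: flag broadly shared SharePoint sites from an
# exported permissions report. All names and data are invented.

BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Staff"}

def flag_oversharing(permissions):
    """Return (site, broad_groups) pairs for sites shared too widely."""
    flagged = []
    for entry in permissions:
        broad = BROAD_GROUPS & set(entry["granted_to"])
        if broad:
            flagged.append((entry["site"], sorted(broad)))
    return flagged

# Example export rows (illustrative only)
report = [
    {"site": "Clinical-Notes",  "granted_to": ["Everyone", "Dr. Patel"]},
    {"site": "Billing",         "granted_to": ["Billing Team"]},
    {"site": "Onboarding-2021", "granted_to": ["All Staff"]},
]

for site, groups in flag_oversharing(report):
    print(f"{site}: shared with {', '.join(groups)}")
```

In this toy report, the clinical notes site and a forgotten onboarding site would both be flagged for review, which is exactly the kind of "opened during a busy period and never locked back down" access described above.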
This is something Vertilocity teams frequently see when practices begin exploring AI tools.
“The biggest thing we’re seeing with AI tools like Copilot is that once they have access to your internal data, you have to be sure your permissions are set correctly. Otherwise, people could end up seeing information they shouldn’t have access to,” said Brianne Laffey, Account Manager at Vertilocity.
For an ophthalmology practice handling protected health information, that is not a minor inconvenience; it is a HIPAA risk.
Why Data Permissions Are the Foundation
The core issue is that AI amplifies whatever access structure you already have. For example, if a front desk coordinator has inadvertent access to clinical documentation stored in SharePoint, Copilot can retrieve and surface that information when they ask a question. The AI is not doing anything wrong; it is doing exactly what it is designed to do. The problem lives upstream, in the permissions model.
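Microsoft's statement that Copilot "honors the existing permissions model" can be pictured with a toy retrieval filter: the assistant only surfaces documents the asking user is already authorized to read. This is a conceptual sketch, not Copilot's actual implementation; the users, roles, and documents are invented.

```python
# Toy model of permission-honoring retrieval: an AI assistant can only
# surface documents the requesting user already has access to.
# Everything here (users, ACLs, filenames) is illustrative.

documents = {
    "clinic-note-001.docx": {"acl": {"clinicians"}},
    "schedule-march.xlsx":  {"acl": {"clinicians", "front_desk"}},
}

user_roles = {
    "dr_lee": {"clinicians"},
    "front_desk_coordinator": {"front_desk"},
}

def visible_documents(user):
    """Documents an AI assistant could surface for this user."""
    roles = user_roles[user]
    return sorted(name for name, doc in documents.items() if doc["acl"] & roles)

print(visible_documents("front_desk_coordinator"))  # schedule only
```

If the coordinator were accidentally added to the clinicians group, the clinical note would become visible to them too. Fixing that is a permissions task, not an AI task.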
The challenge is that many practices know SharePoint permissions are messy, but manual review is slow, disruptive, and easy to postpone—right up until an AI rollout forces the issue.
This is where the right visibility layer changes the timeline. Instead of a site-by-site scavenger hunt, Vertilocity’s SecurePoint IQ is built to provide an environment-wide view of SharePoint access and sharing patterns so teams can find and fix oversharing before Copilot makes it easier to discover.
SecurePoint IQ can support AI readiness by helping you:
- See who has access to what (right now) across SharePoint with centralized visibility.
- Identify oversharing, elevated access, and external sharing risks with permission reporting and alerts.
- Align access to job roles through role-based controls that reinforce least privilege.
- Produce audit-ready documentation with compliance-focused reports that support regulated environments (including HIPAA-aligned needs).
- Reduce risky clutter by flagging duplicates, outdated versions, and unused content that often contains sensitive leftovers.
This work is not glamorous. It is also not optional if you want to deploy AI responsibly.
The Identity and Authentication Problem
Permissions governance is one layer of the problem; authentication is another. AI tools operating in a healthcare environment need to know, with certainty, who is accessing them and whether that access is appropriate in real time.
Multi-factor authentication is the baseline requirement here. Without MFA enforced across your user accounts, an AI tool connected to your Microsoft 365 environment becomes a more attractive target for credential-based attacks. A stolen password that previously gave an attacker access to email now potentially gives them access to a powerful tool that can query clinical and administrative data.
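One minimal way to picture the MFA baseline: given an export of user accounts and their MFA status (for example, from an identity admin report), list the accounts that need remediation before an AI rollout. The field names and account data below are assumptions for illustration, not a real tenant export.

```python
# Hypothetical sketch: find accounts lacking MFA before enabling AI tools.
# Account data and field names are illustrative.

accounts = [
    {"user": "a.nguyen@practice.example",  "mfa_enforced": True},
    {"user": "p.ortiz@practice.example",   "mfa_enforced": False},
    {"user": "scheduler@practice.example", "mfa_enforced": False},
]

def mfa_gaps(accounts):
    """Accounts that should have MFA enforced before Copilot is enabled."""
    return [a["user"] for a in accounts if not a["mfa_enforced"]]

print(mfa_gaps(accounts))
```

Shared or service-style accounts like the scheduler mailbox above are a common blind spot: they are exactly the credentials attackers target once an AI assistant can query clinical and administrative data.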
Security professionals see this pattern frequently when organizations introduce new technology into existing environments.
“Even with strong security tools in place, user awareness is still critical. Attackers constantly find new ways to get around technical protections, which is why training and identity security like MFA remain essential,” said Chris Jeanguenat, Network Administrator at Vertilocity.
Microsoft’s security priorities for 2026 reflect how seriously this problem is being taken at the platform level. Phishing-resistant credentials in Microsoft Entra ID and real-time risk adaptation through Conditional Access are central to how Microsoft is building AI security into its identity infrastructure. In fact, identity admins using the Conditional Access Optimization Agent in Microsoft Entra completed Conditional Access tasks 43% faster and 48% more accurately in a recent study. For ophthalmology practices, the practical implication is that your identity and access management posture needs to be solid before AI enters the picture, not patched together afterward.
What “Compliant by Design” Actually Looks Like
Microsoft does provide meaningful compliance infrastructure for healthcare. Dragon Copilot, Microsoft’s AI tool purpose-built for clinical documentation, uses AES-256 encryption for data at rest and TLS 1.2/1.3 for data in transit, and it operates under a Business Associate Agreement that supports HIPAA compliance. Microsoft 365 Copilot uses tenant isolation through Microsoft Entra to prevent cross-tenant data access. These are not marketing claims, but rather documented technical and contractual protections.
The certifications matter too. Dragon Copilot carries HIPAA BAA coverage, HITRUST, ISO 27001, and ISO 42001 certifications. That level of compliance infrastructure is meaningfully different from general-purpose AI tools that lack tenant-specific isolation and PHI-aware controls.
But compliance infrastructure at the platform level only protects what the platform controls. Your practice’s data governance, your user permissions, your authentication policies, your SharePoint structure: all of that sits outside the platform’s compliance boundary. It is yours to manage.
Preparing Your Practice for AI: Where to Start
The path to responsible AI adoption in ophthalmology is not complicated, but it does require working through the steps in the right order.
- Start with an AI readiness assessment. Before any deployment decision, understand the current state of your Microsoft 365 environment: who has access to what, where PHI is stored, how permissions are structured, and whether MFA is enforced across all accounts. This assessment gives you an honest picture of your risk exposure and a clear list of what needs to be addressed.
- Address SharePoint governance specifically. Audit site permissions, remove access that is no longer appropriate, and establish a process for managing permissions going forward. This single step often has the most significant impact on reducing AI-related data risk.
- Enforce MFA and review your Conditional Access policies. If you are not already requiring MFA for all users, that needs to change before AI tools are introduced. Conditional Access policies should reflect the sensitivity of the data your users can reach.
- Apply sensitivity labels to PHI-containing documents. This allows Microsoft 365 to enforce access controls based on content classification, adding a layer of protection that works in conjunction with permissions.
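The four steps above can be sketched as a simple readiness gate: deployment proceeds only when every foundational control is in place. The check names and the pass/fail structure are illustrative assumptions, not an official assessment methodology.

```python
# Illustrative readiness gate over the four steps above.
# Check names and the sample state are invented for illustration.

def ai_ready(assessment):
    """True only when every foundational control is in place."""
    checks = [
        assessment["permissions_audited"],         # SharePoint governance done
        assessment["mfa_enforced_all_users"],      # identity baseline met
        assessment["conditional_access_reviewed"],
        assessment["phi_sensitivity_labels_applied"],
    ]
    return all(checks)

current_state = {
    "permissions_audited": True,
    "mfa_enforced_all_users": True,
    "conditional_access_reviewed": False,  # still pending
    "phi_sensitivity_labels_applied": True,
}

print(ai_ready(current_state))  # a single pending step blocks rollout
```

The point of the gate is the `all()`: readiness is conjunctive, and one unfinished step, like an unreviewed Conditional Access policy, keeps the rollout a leap of faith rather than a managed decision.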
Once these foundations are in place, AI deployment becomes a managed, lower-risk decision rather than a leap of faith.
The Right Sequence Changes Everything
Ophthalmology practices that want to use AI to reduce documentation burden, improve scheduling efficiency, or streamline administrative workflows are not wrong to want those things. The productivity case is real. The compliance infrastructure exists. The technology is capable.
What separates a successful deployment from a compliance incident is the work that happens before the AI is turned on. Practices that invest in governance, permissions, and identity security first will get more value from AI tools and carry significantly less risk when using them.
Need help assessing your Copilot readiness or tightening SharePoint permissions before you turn AI on? Contact Vertilocity to start the conversation.
