This article provides an overview of privacy controls for consumer AI tools and ºÚÁϳԹÏÍø-licensed tools.

When Boston College faculty, staff, and students use generative AI tools, the most secure approach is to use one of the ºÚÁϳԹÏÍø-licensed tools: Google Gemini, Google NotebookLM, and Microsoft Copilot. When you use your ºÚÁϳԹÏÍø account with these tools, your data is NOT used to train the AI models, and you have the highest level of data protection.

If you opt instead to use OpenAI's ChatGPT, Anthropic's Claude, or the consumer versions of the tools above, it is crucial to understand how they handle your data and to adjust the settings described below before entering sensitive or confidential information.

Consumer AI Tools

While ºÚÁϳԹÏÍø's licensed tools offer robust protection, you may still choose to use the free, public versions of other AI services. Before you do, it is essential to understand their data policies.

ChatGPT (OpenAI)

ChatGPT defaults to using your chats for model training. To prevent this, you must proactively adjust your account settings.

  • Change your settings:
    • Click your profile icon in the bottom-left corner.
    • Go to Settings > Data Controls.
    • Toggle off the setting labeled "Improve the model for everyone."

Claude (Anthropic)

In September 2025, Anthropic updated its policy for consumer accounts. Your data may be used for model training unless you opt out.

  • Change your settings:
    • Go to claude.ai.
    • Click your profile icon in the bottom-right corner.
    • Navigate to Privacy Settings.
    • Turn off the toggle labeled "Help improve Claude."

Taking a few moments to review these policies and adjust your settings for any third-party AI tools will help you maintain control over your data.

ºÚÁϳԹÏÍø-Licensed Tools

The easiest, most secure way to leverage generative AI tools is to use your ºÚÁϳԹÏÍø account with Gemini, NotebookLM, or Copilot. There are no settings to change; your data is protected by default.

Gemini and NotebookLM (Google)

When accessed with your ºÚÁϳԹÏÍø Google account, your interactions with Gemini for Google Workspace and NotebookLM are governed by a specific agreement that protects your data.

  • Your data is not used for model training. Your conversations with Gemini and the documents you upload to NotebookLM are kept within the ºÚÁϳԹÏÍø domain and are not used to train Google's generative AI models.
  • 72-hour retention. For security and service purposes, your activity with these tools is saved for up to 72 hours. After that, it is automatically deleted.
  • No setting required. This data protection is a benefit of the university's license and is enabled by default for all ºÚÁϳԹÏÍø accounts.

Copilot chatbot (Microsoft)

When you use your ºÚÁϳԹÏÍø-licensed Microsoft account, Microsoft Copilot also operates with enterprise-grade data protection.

  • Your data is not used for model training. The prompts you enter and the responses you receive are not used to train the underlying AI models.
  • Data remains within your organization. Your chats and uploaded files remain within the Microsoft 365 service boundary and are not shared with other customers.
  • Look for the green shield. When using Microsoft Copilot, a green "Protected" shield icon in the top-right corner indicates that enterprise data protection is enabled.