
Cybersecurity Policy for the use of Generative AI

Effective Date: March 1, 2024

1.0 Purpose

The purpose of this policy is to establish guidelines and best practices for the responsible and secure use of generative artificial intelligence (AI) within our organization. Generative AI refers to technology that can generate human-like text, images, or other media content using AI algorithms.

2.0 Scope

This policy applies to all employees, contractors, and third-party individuals who have access to generative AI technologies or are involved in using generative AI tools or platforms on behalf of our organization.

3.0 Acceptable Use

General Guidelines

DO:

  • Understand that GenAI tools may be useful but are not a substitute for human judgment and creativity.
  • Understand that many GenAI tools are prone to “hallucinations” (confidently presented answers that are false, fabricated, or stale), so their responses must always be carefully verified by a human.
  • Treat every bit of information you provide to a GenAI tool as if it will go viral on the Internet, attributed to you or the Company, regardless of the settings you have selected within the tool (or the assurances made by its creators).
  • Inform your supervisor when you have used a GenAI tool to help perform a task.
  • Verify that any response from a GenAI tool that you intend to rely on or use is accurate, appropriate, not biased, not a violation of any other individual or entity’s intellectual property or privacy, and consistent with Company policies and applicable laws. Since general purpose GenAI tools are often trained on data whose provenance you cannot verify, this review should be extensive.

DO NOT:

  • Do not use GenAI tools to make or help you make employment decisions about applicants or employees, including recruitment, hiring, retention, promotions, transfers, performance monitoring, discipline, demotion, or terminations.
  • Do not upload or input any confidential, proprietary, or sensitive Company information into any GenAI tool. Examples include passwords and other credentials, protected health information, personnel material, information from documents marked Confidential, Sensitive, or Proprietary, or any other non-public Company information that might be of use to competitors or harmful to the Company if disclosed. Doing so may breach your or the Company’s obligations to keep certain information confidential and secure, risks widespread disclosure, and may cause the Company’s rights to that information to be challenged.
  • Do not upload or input any personal information (names, addresses, likenesses, etc.) about any person into any GenAI tool.
  • Do not upload or input any information of any kind about any customer or any customer’s customers/guests into any GenAI tool for any reason.
  • Do not represent work generated by a GenAI tool as being your own original work.
  • Do not integrate any GenAI tool with internal Company software without first receiving specific written permission from your supervisor and the Engineering and Security Departments.
  • Do not use GenAI tools other than those on the approved list from the Engineering and Security Departments. Malicious chatbots can be designed to steal or convince you to divulge information.

3.1. Authorized Use

Generative AI tools and platforms may only be used for business purposes approved by the organization. Such purposes may include content generation for marketing, product development, research, or other legitimate activities.

3.2. Compliance with Laws and Regulations

All users of generative AI must comply with applicable laws, regulations, and ethical guidelines governing intellectual property, privacy, data protection, and other relevant areas.

3.3. Intellectual Property Rights

Users must respect and protect intellectual property rights, both internally and externally. Unauthorized use of copyrighted material or creation of content that infringes on the intellectual property of others is strictly prohibited.

3.4. Responsible AI Usage

Users are responsible for ensuring that content produced using generative AI aligns with the organization’s values, ethics, and quality standards. Generated content must not be used if it is misleading, harmful, offensive, or discriminatory.

4.0 Access and Security

4.1. Authorized Access

Access to generative AI tools, platforms, or related systems should be restricted to authorized personnel only. Users must not share their access credentials or allow unauthorized individuals to use the generative AI tools on their behalf.

4.2. Secure Configuration

Generative AI tools and platforms must be configured securely, following industry best practices and vendor recommendations. This includes ensuring the latest updates, patches, and security fixes are applied in a timely manner.

4.3. User Authentication

Strong authentication mechanisms, such as multi-factor authentication (MFA), should be implemented for accessing generative AI tools and platforms. Passwords used for access should be unique, complex, and changed regularly.
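
As an illustration of the complexity requirement above, a minimal validator might check length and character-class mix. This is a sketch only; the 14-character minimum and the required character classes below are assumed baselines, not values mandated by this policy.

```python
import re

def meets_complexity(password: str, min_length: int = 14) -> bool:
    """Illustrative check: minimum length plus at least one character
    from each of four classes. Real requirements may differ."""
    classes = [r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"]
    return (len(password) >= min_length
            and all(re.search(c, password) for c in classes))

# A short or single-class password fails; a long mixed one passes.
assert meets_complexity("short") is False
assert meets_complexity("Tr0ub4dour&Horse!!") is True
```

A check like this complements, rather than replaces, MFA: even a complex password alone does not satisfy the multi-factor requirement in this section.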

4.4. Data Protection

Users must handle any personal, sensitive, or confidential data generated or used by generative AI tools in accordance with the organization’s data protection policies and applicable laws. Encryption and secure transmission should be employed whenever necessary. Inputting sensitive or confidential organization data into an online AI prompt is prohibited. A Data Loss Prevention (DLP) solution should be implemented to stop data leaks through AI tools.
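
The prohibition on inputting sensitive data can be partially enforced before a prompt ever leaves the network. The sketch below shows a simple pattern-based pre-screening filter; the patterns are illustrative assumptions, and a real DLP product would use the vendor’s classifiers and the Company’s own data-classification labels rather than a short regex list.

```python
import re

# Illustrative patterns only -- not an exhaustive or mandated list.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                             # SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                            # possible card number
    re.compile(r"(?i)\b(confidential|proprietary|internal only)\b"),  # document markings
    re.compile(r"(?i)\bpassword\s*[:=]"),                             # credential material
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears safe to send to an external GenAI tool."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

# A prompt carrying a Confidential marking is blocked; plain text passes.
assert screen_prompt("Summarize these public release notes") is True
assert screen_prompt("Summarize this CONFIDENTIAL merger memo") is False
```

Pattern matching catches only obvious leaks; it does not detect paraphrased or unlabeled sensitive content, which is why the policy still requires human judgment before any data is submitted.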

5.0 Monitoring and Incident Response

5.1. Logging and Auditing

Appropriate logging and auditing mechanisms should be implemented to capture activities related to generative AI usage. These logs should be regularly reviewed to detect and respond to any suspicious or unauthorized activities.
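
One way to meet this requirement is to emit a structured, reviewable record for every GenAI interaction. The sketch below uses Python’s standard logging module; the logger name and event fields are assumptions for illustration, not a mandated schema.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for GenAI usage events.
audit_log = logging.getLogger("genai.audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

def record_genai_event(user: str, tool: str, action: str, approved: bool) -> dict:
    """Emit one structured audit record for later review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,
        "tool_on_approved_list": approved,
    }
    audit_log.info(json.dumps(event))
    return event

# Example: record a prompt sent to a tool on the approved list.
evt = record_genai_event("jdoe", "example-chatbot", "prompt_submitted", True)
```

Emitting one JSON object per event makes the periodic review called for above straightforward: logs can be filtered for unapproved tools or unusual usage without parsing free-form text.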

5.2. Incident Reporting

Any suspected or confirmed security incidents related to generative AI usage should be reported promptly to the organization’s designated cybersecurity team or incident response personnel.

5.3. Vulnerability Management

Regular vulnerability assessments and security testing should be conducted on generative AI tools and platforms to identify and address any security weaknesses or vulnerabilities.

6.0 Training and Awareness

6.1. Education and Training

Employees and relevant personnel should receive training on the responsible and secure use of generative AI. This training should cover topics such as ethical considerations, potential risks, security best practices, and compliance requirements.

6.2. Awareness Campaigns

Regular awareness campaigns and communications should be conducted to reinforce the importance of cybersecurity, responsible AI usage, and adherence to this policy.

7.0 Non-Compliance

Non-compliance with this policy may result in disciplinary action, up to and including termination of employment or contract, and legal consequences if applicable laws are violated.

8.0 Policy Review

This policy will be reviewed periodically and updated as necessary to address emerging risks, technological advancements, and regulatory changes.