Use of Generative AI in Employment Policy

Purpose

The purpose of this Use of Generative AI in Employment Policy (“Policy”) is to provide employees of Lindenwood University, and those performing work and/or services for Lindenwood, with guidelines for the use of generative artificial intelligence (“Generative AI”) tools such as ChatGPT. Generative AI is a type of artificial intelligence that can generate content, such as art, music, or text, and other output typically associated with human creativity. Generative AI systems known as Large Language Models (LLMs) are trained on large volumes of written information, using deep learning techniques to generate new content. Because Generative AI is rapidly changing and developing, this Policy addresses its use in employment and is subject to updates that correspond to developments in the field.

Scope

This policy applies to all employees of Lindenwood University and those performing work and/or services for Lindenwood.

This Policy covers the use of Generative AI for University purposes. It does not apply to the use of Generative AI for purely personal reasons.

Policy

  1. Definitions
    • 1.1. “Application” means a software program that runs on a System.
    • 1.2. “University” or “Lindenwood” means Lindenwood University.
    • 1.3. “Legal” means the University’s Office of General Counsel.
    • 1.4. “Personal Data” means all information relating to an identified or identifiable individual.
    • 1.5. “IT” means information technology.
    • 1.6. “Network” means a group of computer systems and other computing hardware devices that are connected through communication channels, such as the Internet, to facilitate communication and resource-sharing among Users.
    • 1.7. “System” means all IT equipment, whether personal or University-owned, that connects to the Network or accesses Applications. This includes, but is not limited to, desktop computers, laptops, smartphones, tablets, printers, data and voice Networks, networked devices, software, electronically stored data, portable data storage devices, third-party networking services, and telephone handsets.
    • 1.8. “Users” means persons who have access to any System. This includes employees, contractors, contingent workers, agents, consultants, vendors, service providers, suppliers, and other third parties.
  2. Use Case Dependent
    • 2.1. Whether Users should use Generative AI in the performance of their jobs for Lindenwood depends on the specific use case. The issues raised by using Generative AI for menial tasks (e.g., asking for the weather forecast) are much different from those raised by skilled tasks (e.g., drafting outward-facing University publications). Users must exercise good and sound judgment, consistent with the guidelines in this Policy, before using Generative AI for University purposes.
  3. Processing Activities. Use of Generative AI in support of University activities is divided into the following four categories: (1) Prohibited, (2) High Risk, (3) Medium Risk, and (4) Low Risk processing activities.
    • 3.1. Prohibited Processing Activities. Users must not engage in the following activities with Generative AI:
      • a. No Non-Publicly Available Personal Data. Users must not enter non-publicly available Personal Data into Generative AI (e.g., Social Security numbers, medical records, financial information, and driver’s license numbers).
      • b. No Student Information. Users must not enter student information into Generative AI.
      • c. No University Confidential Information. Users must not enter University confidential information into Generative AI. This includes items such as meeting notes, proprietary information, financial records or analysis, images, audio, video, and nonpublic data and information.
      • d. Human Resources. Users must not enter into Generative AI information relating to the hiring (including job postings), promotion, discipline, or termination of employees.
      • e. Legal and Compliance. Users must not use Generative AI to draft legal documents (including contracts) or compliance reports, or to perform other legal or regulatory activities with potential legal implications.
    • 3.2. High Risk Processing Activities. The following activities are considered high risk. Users must (1) receive advance written approval from the vice president of their division or college before using Generative AI for any of these activities and (2) ensure human intervention, review, and/or approval before any output is used or relied upon:
      • a. Public Documents. Preparation of publicly facing University statements, press releases, advertisements, promotions, or similar written material.
      • b. Decision Making. Generating insights or recommendations that directly influence crucial decisions, such as strategic planning, financial investment, or operational changes.
      • c. Student Inquiries. Automatically generating responses to student inquiries, prompts or complaints.
      • d. Predictive Modeling. Predicting or anticipating future events, trends, or behaviors based on data analysis.
      • e. Product Development. Creating, enhancing, or diversifying products or services by generating concepts, designs, and/or solutions.
    • 3.3. Medium Risk Processing Activities. Before using Generative AI for any use not covered in Sections 3.1 and 3.2 above, Users must analyze and document the following factors and determine that the benefit of such use outweighs the risk of harm to the University:
      • a. Data Privacy and Security. The use of Generative AI must comply with all privacy, cybersecurity, and education laws, such as the Family Educational Rights and Privacy Act (FERPA), as well as institutional policies.
      • b. Bias and Discrimination. The use of Generative AI must not result in bias and/or discrimination against any student, employee, and/or other individual.
      • c. Plagiarism. The use of Generative AI must not result in plagiarism.
      • d. Copyright Infringement. The use of Generative AI must not result in copyright infringement.
      • e. Misinformation. Generative AI can produce inaccurate or misleading information. The use of Generative AI must not result in the University producing a public document that contains incorrect, inaccurate, or misleading information.
      • f. Confidentiality. Queries entered into Generative AI can be reverse engineered. The use of Generative AI must not result in the University breaching a duty of confidentiality.
      • 3.3.1. Users must report the analysis required by this Section to their supervisor at least ten (10) business days before engaging in the use. The supervisor must approve the use and may reclassify it when appropriate. If the use is reclassified as a Prohibited Processing Activity, Users may not engage in it. If the use is reclassified as a High Risk Processing Activity, the User making the request must comply with the requirements of Section 3.2 above.
    • 3.4. Low Risk Processing Activities. The following are low risk activities for which no review is required:
      • a. The use of Generative AI for personal purposes that does not involve any of the activities or risks described in Sections 3.1 to 3.3 above.
      • b. Personal Use is considered a Low Risk Processing Activity for the University. This risk analysis does not take into account any possible risks to the individual using Generative AI for personal purposes. Activities and the types of inputs for Personal Use are subject to an individual’s own discretion.
      • c. The use of Generative AI for instructional purposes that does not involve any of the activities or risks described in Sections 3.1 to 3.3 above and does not violate existing University policies on confidentiality, data protection, academic integrity, or any other policy.
  4. Independent Validation
    • 4.1. Generative AI can produce output that is inaccurate, incorrect, or misleading, or that violates copyright or other legal requirements.
    • 4.2. All work product created using Generative AI must be independently validated for accuracy and legality consistent with the guidelines in this Policy and applicable law.
  5. Transparency
    • 5.1. The use of Generative AI should be transparent. For example, the use of Generative AI in chatbots should be disclosed to and by the User. If a document, data, and/or information is created using Generative AI, that use should be disclosed and made clear on the document.
    • 5.2. The following stock language, from OpenAI’s Sharing and Publication Policy, can be used for disclosure purposes:
      • “The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model [or insert other Generative AI used]. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”
  6. Exceptions
    • 6.1. Exceptions to this Policy may be considered in very limited circumstances where the potential risk and harm to the University are mitigated. Requests for exceptions must be directed to Legal in advance.
  7. Training
    • 7.1. Regular, ongoing mandatory training on this rapidly evolving topic will be provided to employees through the Lindenwood Learning Academy.