Guidance on the use of Artificial Intelligence in BPS

Updated by Mark Racine

The purpose of this document is to ensure the responsible and ethical use of generative artificial intelligence (AI) in the Boston Public Schools. Generative AI, such as ChatGPT, is a type of AI designed to create content such as pictures or text. It is a powerful tool that can support student learning, help teachers with their instruction, and improve the efficiency of school & district operations. However, as with any new technology, it is important to be mindful of the potential risks and ensure that generative AI is used in a way that is safe, secure, ethical, and supportive of learning.

Defining AI

Artificial Intelligence (AI)
A computer program/app that can perform tasks that typically require human intelligence.
Generative AI
A type of AI designed to create content, like text or images, based on vast amounts of data. Generative AI tools typically generate content in response to a prompt that explains the user’s need.

Guiding Principles

The following guidance is based on the White House’s Blueprint for an AI Bill of Rights (2022) and considers the five principles set forth in that guidance.

  • Academic Excellence: We support the use of generative AI in the classroom to improve the teaching & learning experience for all members of the BPS community. Productive use of generative AI in the academic setting includes collaboration prior to using AI and transparency between the educator & learner.
  • Cybersecurity & Privacy: The Boston Public Schools will use generative AI applications in accordance with their terms of service and in a manner consistent with local, state, and federal guidelines. For students in K-12, this includes following COPPA and FERPA laws and ensuring that BPS staff follow guidelines for applications that require parent and guardian permission for students under age 18. For any user, it is important to be mindful of the data that is shared with any generative AI tool.
  • Ethical Use of AI: Generative AI tools pull content from a variety of sources and do not always display the sources of that content. The lack of an author, however, does not mean that the content can be used without attribution. AI tools should be used with transparency and AI-generated content attributed appropriately.
  • Biases and Misinformation: AI can only learn from its source(s) and prompts, so it may perpetuate the biases, misinformation, hallucinations, and problematic content of the original material. The implicit bias of the generative AI designers is likely woven into the framework of the tool. These issues have surfaced in problems with facial recognition and other types of AI software and demonstrate the importance of ensuring AI-generated content is always reviewed, challenged, and edited by humans.

Individual Use of AI

The Boston Public Schools respects an individual’s choice to use generative AI as an educational and/or productivity tool but encourages all members of the community to take the following steps to ensure appropriate use of generative AI in the classroom and work environment.

As a learner…
  • Speak with your teacher before using generative AI tools to understand the purpose of an assignment and how generative AI should/should not be used.
  • Avoid entering any personal information into generative AI tools. Many tools will use information gathered through prompts and may create privacy and security issues if you enter personal information.
  • When using generative AI for school, keep a record of the prompt you used as well as the output from the tool. This will help your teacher to understand how AI was used and distinguish between your work and the AI-generated material.
  • Follow guidelines provided by your teacher to properly attribute AI-generated content. 
  • Fact-check and proofread all AI-generated content for accuracy, bias, or potentially dangerous content.
As a teacher…
  • Post your expectations for the use of generative AI in your classroom, syllabus, or assignment instructions so your students know when & how generative AI can be used. Include a rationale for why AI should or should not be used on specific assignments.
  • Speak with your students about the use of generative AI so they understand when it is appropriate to use in school.
  • Provide guidance on how to attribute AI-generated content in student work. 
  • Consider posting this image in your classroom to use as a reference when discussing AI’s role in an assignment.
  • Consider the usage terms of generative AI tools, such as age restrictions, before using a tool in class. For example, ChatGPT is restricted for students under 13, and parental consent is needed for students between 13 and 17.
  • Fact-check and proofread all AI-generated content for accuracy, bias, or potentially dangerous content. Some online tools can help teachers detect AI-generated content, but they can sometimes produce false positives.
  • Consider taking time to introduce students to tech industry careers, as well as the critical thinking, biases, ethics, and misinformation issues that AI has introduced into our society.
As an employee…
  • Communicate with your colleagues/peers when you use generative AI in the work or school environment.
  • Do not enter any private or confidential information into generative AI models/tools.
  • The use of confidential/personal data (names, personally identifiable information, grades, IEPs, assessments, etc.) is strictly prohibited with open/public generative AI models. This includes paid subscriptions to generative AI models like ChatGPT Plus.
  • Fact-check and proofread all AI-generated content for accuracy, bias, or other unwanted material.

Boston Public Schools’ Use of AI

The Boston Public Schools process for formally adopting generative AI tools will follow the standard procurement process (OIIT-2) with consideration for the following.

  • Tools/systems that utilize generative AI should be able to explain in plain language how their system/model utilizes AI and what data is or was used in the development of the data model. This explanation should be understandable to the greater Boston Public Schools community, including students, teachers, and parents.
  • Private/confidential data used by generative AI tools cannot be used outside of its intended purpose or outside of the district’s control.
  • Tools that utilize generative AI should be piloted with their intended population prior to formal adoption. Pilots should be evaluated using the Racial Equity Planning Tool before formal adoption into the organization. When possible, generative AI models should provide controls to exclude data down to the student level.
