The availability of generative artificial intelligence (GAI) and large language models (LLMs) to the general public has created opportunities and challenges in education and, of course, much else. During the Fall 2023 semester, the Provost tasked the Faculty Center for Learning Innovation (FCLI) Associate Dean with forming a committee of ACC faculty to draft policy language, from the faculty perspective, regarding the use of artificial intelligence in the classroom. The committee included adjunct and full-time faculty who were already using GAI in the classroom, and the draft language includes three options for the use of GAI: Prohibited, Permitted, and Required. This draft language is meant to be a starting point for the college to consider when developing an institutional policy. The committee was also tasked with drafting syllabus language meant as a starting point for faculty to consider including in their syllabi.
As of June 2025, the Curriculum and Programs Shared Governance committee requires that all faculty include an AI policy in their syllabus. The framework below seeks to provide a useful guide for crafting a comprehensive AI policy. Additional resources and information can also be found at the TLED Guide to Generative AI webpage and the Collegewide AI Planning webpage.
What follows are the draft policy statements and examples.

Introduction
This policy outlines the guidelines and principles for the ethical and responsible use of Generative Artificial Intelligence (GAI) in all departments and programs within Austin Community College. GAI is a powerful tool that can enhance education and research, but its use must adhere to principles of transparency, fairness, and ethical conduct. This policy is subject to periodic review and revision to align with evolving ethical standards, technological advancements, and college initiatives.
Faculty must balance countless responsibilities. Implementing an AI policy doesn’t have to be yet another burden, though — it can be a tool that protects your teaching, supports your students, and aligns with your values. You are not alone in this. The Collegewide AI Strategic Planning Committee, Faculty Center for Learning Innovation (FCLI), and TLED are here to help with examples, office hours, and support at every step.
Together, we can model what ethical, equitable, and forward-thinking AI use in education truly looks like.
Departmental Policy Framework
Each department/program will engage faculty in developing a policy related to GAI. Within departments/programs, faculty will have flexibility regarding how they incorporate GAI. However, individual instructor policies must incorporate the following required elements in the course syllabus:
A syllabus statement on the GAI course policy
- Introduction – Introduce your policy on GAI use in the course
- Rationale – State why and under what circumstances GAI is prohibited/permitted/required to be used in the course
- Definition of GAI – Define what GAI is in the context of the course
- Resources – In courses that permit or require GAI, provide resources that guide students on how to properly use GAI in the course
- Assessment – In courses that permit or require GAI, clarify if and how its use will be assessed in the completion of activities
- Penalties – Clearly state the consequences of violating the GAI policy
- Exceptions – Explain any conditions under which there may be an exception to the GAI policy in the course
- Usage Permissions –
  - Prohibited – Clearly state what GAI activities are not allowed
  - Permitted – Clearly state what GAI activities are allowed
  - Required – Clearly state what GAI activities are required
Policy Choice Should Reflect Pedagogical Purpose
Before selecting or writing a GAI policy, instructors are encouraged to ask:
- What are the learning goals of this course, and how might AI tools support or hinder them?
- What level of digital literacy do my students have, and what responsibility do I have to guide or protect them?
- What are my own values and comfort level with AI tools, and how do I want those reflected in my classroom?
The choice between prohibited, permitted, or required GAI use should be made intentionally with a clear understanding of both the technology and the instructional goals.
“No AI” Policies Are Difficult to Enforce and Require Extra Care
Some instructors may choose to prohibit AI use altogether. This is a valid pedagogical decision, but one that comes with implementation challenges. Specifically:
- Detection tools like GPT detectors are unreliable and can lead to false accusations. These tools often misclassify student writing, particularly from multilingual, neurodiverse, or first-generation students. Their use as evidence of academic dishonesty is strongly discouraged by many national education organizations.
- Enforcement should not rely on “gut instinct” or AI detectors. Instead, faculty should ensure that expectations are clearly communicated, modeled, and discussed with students.
If you choose a “no AI” policy, you are encouraged to:
- Be explicit about what kinds of help are and are not allowed (e.g., Grammarly, ChatGPT, study groups). The more clearly you delineate acceptable and unacceptable GAI use, the clearer the boundaries will be for both faculty and students.
- Provide examples of what constitutes AI-generated work and what does not
- Include a conversation early in the semester to clarify expectations and give students space to ask questions
Avoid a Policing Mindset: AI as a Teachable Moment
AI in education is here to stay—and our students will encounter these tools in academic, workplace, and personal contexts. Even in classes where GAI is limited or prohibited, faculty are encouraged to frame conversations about AI from a teaching mindset rather than a policing mindset.
Students are more likely to learn from mistakes or misunderstandings when they feel safe, supported, and seen as collaborators in their education.
- If a student uses AI inappropriately, begin with curiosity: Why did they use it? Did they understand the policy? Were they overwhelmed or underprepared?
- Use violations as opportunities for reflection and learning, especially for first offenses
- Consider having students submit AI-use statements or reflective memos about their process
Speak Plainly and Early with Your Students
Many academic integrity issues stem from miscommunication or lack of specificity. Regardless of your stance on GAI, talk with your students about:
- What is and isn’t allowed in your course, with specific examples
- How GAI use will be checked, evaluated, or cited
- What integrity means in your discipline, and how AI fits into that picture
These expectations should be stated in your syllabus, discussed during the first week of class, and reinforced throughout the semester through reminders and by modeling the use of GAI.
Examples
The following examples may be useful in crafting a syllabus statement appropriate to the needs, goals, and curricula of your courses.
Example 1:
This course incorporates Generative Artificial Intelligence (GAI), including large language models and image generators. GAI is defined as artificial intelligence systems capable of creating new content based on patterns learned from existing knowledge. As responsible members of the academic community, we should use AI with consideration and intention. To use AI responsibly, students are encouraged to schedule a meeting with the instructor for guidance, clarification, and usage permissions, and to read available resources critically. Key considerations include citing AI as a source, preventing misinformation, avoiding plagiarism, and ensuring fairness while avoiding bias and discrimination. We must also respect intellectual property rights, maintain ethical authenticity, prioritize data privacy and security, and promote transparency and fairness in AI usage. These principles uphold academic honesty and integrity while engaging with AI technology in this course. The assessment of AI-generated content evaluates the quality, authenticity, and relevance of AI-produced content while considering student review and verification, comparison to human-generated content, consistency, bias and fairness, appropriateness and relevance to the subject matter, plagiarism review, ethical and legal compliance, cross-validation against multiple AI models, AI transparency and explainability, and feedback from users, such as your professor. Penalties for academic dishonesty will be enforced, following approved College Guidelines, with exceptions granted only on valid grounds as approved by the instructor.
Example 2:
- Introduction: In this course, the use of generative AI (GAI) technologies is strictly prohibited to preserve academic integrity and ensure the development of student competencies.
- Rationale: The prohibition is in place to encourage original thought, manual problem-solving skills, and to maintain equity in educational opportunities and assessments.
- Definition of GAI: Generative AI refers to artificial intelligence systems that can generate text, images, or other content based on minimal input. This includes chatbots, image generation tools, and code assistants.
- Usage Permissions: Prohibited: Students are not allowed to use GAI for completing assignments, projects, tests, or any form of assessment in this course.
- Penalties: Any violation of this policy will result in academic penalties which may include a failing grade for the activity, reporting to academic affairs, and further disciplinary action.
- Exceptions: Exceptions to this policy will only be made under specific circumstances approved by the instructor, typically where technology is used to accommodate learning differences.
Example 3:
- Introduction: The use of generative AI (GAI) is permitted in this course under certain conditions to enhance learning while maintaining academic integrity.
- Rationale: GAI is permitted to foster technological fluency and to leverage advanced tools for research, as long as it does not substitute for critical thinking and learning.
- Definition of GAI: Generative AI encompasses technologies that create content through learned patterns and data without direct human input.
- Usage Permissions: Permitted: GAI can be used for initial research, idea generation, and learning coding practices. It is not to be used for final submissions unless explicitly cited and discussed.
- Resources: Guidance on the ethical and effective use of GAI will be provided through designated course materials and office hours.
- Assessment: Contributions of GAI must be clearly cited and will be assessed on the student’s ability to critically analyze and integrate the AI-generated content.
- Penalties: Misuse of GAI, including a failure to cite, will be considered a breach of academic integrity, with consequences including a failing grade for the assignment and academic review.
- Exceptions: Should the technology be required as an accommodation, exceptions will be made on a case-by-case basis.
Example 4:
- Introduction: This course requires the use of generative AI (GAI) to complete certain assignments and activities as part of its curriculum.
- Rationale: GAI is integrated into the course to ensure students are proficient in cutting-edge technologies and to enhance the scope of their academic exploration.
- Definition of GAI: Generative AI includes any software that uses artificial intelligence to generate content, code, or other outputs from user prompts.
- Usage Permissions: Required: Students are expected to use GAI for specific tasks, which will be clearly outlined in assignment guidelines.
- Resources: Resources, tutorials, and guidelines for using GAI will be provided. Students are expected to utilize these to ensure responsible use.
- Assessment: The use of GAI will be part of the grading rubric. Students will be evaluated on how effectively they use and integrate GAI in their work.
- Penalties: Failure to appropriately use GAI as required by the course will impact a student’s grade and may result in a need to retake the assignment or activity.
- Exceptions: No exceptions to the requirement of GAI use will be made without a formal accommodation request approved by the instructor.
A syllabus statement on data privacy and security
In courses that permit or require GAI, syllabi will inform students that in many cases, content shared with or produced by GAI platforms is available for use by the parent company of the technology platform. Thus, students should not share personally identifiable or otherwise confidential or sensitive information, such as student IDs, social security numbers, passwords, or medical and financial information.
Example 1:
In accordance with our dedication to privacy and security, students are advised to refrain from sharing any sensitive or personally identifiable information on GAI platforms. Given that content inputted into or generated by these platforms may become accessible to the platform’s operators, caution is advised. Always ensure your data is clean, accurate, and does not include personal information before interacting with these technologies.
A syllabus statement on academic honesty
In courses that permit or require GAI, syllabi will affirm a commitment to academic and personal integrity by requiring that student work reflect authentic (student-generated) effort and original critical thinking when GAI is used. Syllabi will advise students that, unless otherwise specified, use of GAI to produce the bulk of the thinking, writing, or other output for an assignment constitutes a breach of academic integrity. Syllabi will require that GAI-generated content or data sources must be cited and credited appropriately, just as any other source would be. Syllabi must specify any disciplinary actions or academic consequences that result from violations of academic honesty standards.
Example 1:
Generative AI tools, such as ChatGPT and others, are rapidly evolving technologies that have great potential in all realms of human endeavor, including teaching and learning. They also pose serious challenges, particularly with regard to academic integrity. At Austin Community College, the use of these tools in coursework, like any others, is subject to the same standards outlined in the college’s Academic Integrity policy.
Presenting AI-generated content as your own without proper attribution is considered a violation of academic integrity. All work you submit must reflect your own understanding and effort. If you use generative AI to help with your work, you must clearly acknowledge how and where it was used. Intellectual honesty is essential to a fair and supportive academic environment.
Individual instructors may set their own expectations and limitations regarding the use of generative AI tools in their classes, which should be clearly stated in the syllabus. To ensure you are complying with your course requirements, always consult with your instructor before using AI tools for assignments. Policies regarding the use of AI may vary by instructor; it is incumbent on them to state their policies, and incumbent on you to follow them.
To learn more about the college’s expectations around academic honesty, please refer to the college’s statement on Academic Integrity and the Academic Integrity Process website.
A syllabus statement on bias, discrimination, and falsehood
In courses that permit or require GAI, syllabi will inform students of dangers associated with the use of GAI, such as the production of false and/or biased information. Syllabi will inform students that information from GAI platforms that cannot be traced to its source cannot be considered accurate without verification.
Example 1:
Students are cautioned that GAI may inadvertently produce biased or inaccurate content. It is incumbent upon the student to critically evaluate and verify the information provided by these platforms. Relying on unverified GAI content for academic work is unacceptable and may lead to disciplinary action.
Summary and Action Steps for Faculty
To assist faculty with developing a syllabus policy, this checklist provides suggestions.
- Decide on your policy type. Choose prohibited, permitted, or required. Select what aligns with your goals, not what feels easiest or safest.
- Use the policy builder. Adapt the template provided with the 8 components. Add clarity and examples.
- Talk with your students. Plan a class conversation or activity in Week 1 to explain your policy. Ask questions and clarify misunderstandings.
- Don’t rely on detectors. Avoid using GPT detection tools as proof of misconduct. If a concern arises, talk with the student and look at intent, not just output.
- Focus on teaching. When AI is misused, treat it as a moment to coach, not just correct. Offer guidance on responsible practices.
Departmental Employees and Staff
All department/program employees and staff will maintain the highest ethical standards in producing work of any kind for use within or outside the College. GAI should only be used transparently and with attention to dangers such as inaccuracy, privacy violations, copyright infringement, and perpetuation of bias and discrimination.
The initial guidelines were drafted by Amber Sarker, Christine Berni, Curtis Eckerman, Herbert Coleman, Job Hammond, LaKisha Barrett, Sara Farr, Susan Meigs, Thomas Samuel, and Stephanie Long. They have been expanded and further detailed by the Collegewide AI Strategic Planning Committee.