How might UF Faculty and Students Utilize Generative AI?
There are many applications of generative AI (Gen AI) in teaching and learning. NaviGator Chat is a secure, UF-hosted tool for generating text in a conversational manner, and Microsoft Copilot is also approved for use when signed in with your UF account. Other tools, such as ChatGPT, should not be used without appropriate risk assessments and approvals. On this page, NaviGator Chat is mentioned by name for convenience, but other Large Language Models (LLMs) can be used in the same ways. Here are a few ways that NaviGator Chat and other text generators could be used to enhance teaching and learning:
- Tutoring and Learning Assistance: NaviGator Chat can be an excellent source of tutoring for students. Students can use NaviGator Chat to get a simplified explanation of a general topic, to have a transcript of an explanation reworded for easier understanding, to have the purpose of a sample of programming code explained, and more. The ability to ask potentially silly questions in a non-judgmental environment can be extremely beneficial for students who are nervous about approaching TAs or attending office hours. Language model AIs are likely to become prominent tools in the learning process, much as graphing calculators made visualizing functions far easier when they were first released.
- Language Translation: NaviGator Chat can be used to practice conversations in other languages, though students must be able to verify the accuracy of the phrases and responses that NaviGator Chat provides.
- Content Creation: Faculty can use NaviGator Chat to quickly create small assignments or rubrics that can then be edited to fit the needs of the course. NaviGator Chat can also be used to brainstorm ways to explain complex topics at a simpler level. For example, a language model in NaviGator Chat can be asked to “Explain general relativity at a high school level” to get a starting point for a lecture (a scripted version of this prompt is sketched after this list).
- Brainstorming: NaviGator Chat can be used to quickly brainstorm ideas for lectures or assessments. By having the AI generate outlines of a lecture series or list potential ways to assess knowledge about a topic, an instructor could then use those starting points to craft a syllabus or lesson plan for upcoming courses.
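For instructors or students who prefer to script these interactions, here is a minimal sketch of the “Explain general relativity at a high school level” prompt from the list above. It assumes NaviGator Chat (or another approved tool) exposes an OpenAI-compatible API; the base URL, API key, and model name below are hypothetical placeholders, so check UF's NaviGator documentation for the actual connection details.

```python
# A minimal sketch of scripting a content-creation prompt. ASSUMPTION: the
# service exposes an OpenAI-compatible API; the URL, key, and model name are
# hypothetical placeholders, not real NaviGator values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.ufl.edu/v1",  # hypothetical endpoint
    api_key="YOUR_UF_API_KEY",              # placeholder credential
)

response = client.chat.completions.create(
    model="example-model",  # hypothetical model name
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Explain general relativity at a high school level."},
    ],
)
print(response.choices[0].message.content)
```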
The AI for Student Success page has example prompts that students can use for these activities.
The Tech Byte webinar titled AI Prompt Cookbook: Generative AI Recipes Designed to Enhance Teaching presented many ideas on ways to use generative AI to support teaching. The recording (54:09) and the accompanying “cookbook” of ideas are both available for viewing. The CITT Tech Byte page also has links to future and past Tech Byte events.
Limitations of Large Language Models
While powerful, generative AI tools powered by large language models (LLMs) do have limitations on their abilities.
- Overly Confident: AI tools powered by large language models (LLMs) are often extremely confident in the phrasing of their answers. The text generated by LLMs often does not acknowledge other potential answers that may be more correct, and little or no indication is given of the “probability” that the answer provided is the best one. Using LLMs to answer research questions can therefore lead to inaccurate conclusions. For example, an AI may assert false information or claim to have gotten information from non-existent sources, a phenomenon sometimes called “hallucination”. All facts provided by generative AI should be independently verified.
- Potentially Biased or Inaccurate: These models are trained on a vast amount of text from the internet, but that data may contain biases or inaccuracies that the model replicates because of the prevalence of those patterns online.
- Lack of Recent Knowledge: Large language models have a "knowledge cutoff" date, meaning they have not been trained on information published after that date. Some AI tools can draw on external data sources, but the models are not trained on those sources to the same extent; instead, they use search results as context while responding to a prompt.
- Lack of References: LLMs are often not able to analyze specific works or provide relevant references for their answers. In addition, generative AI can sometimes create fictional references when asked to justify its responses. One common mitigation, supplying your own source text directly in the prompt, is sketched after this list.
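Because of the cutoff and reference limitations above, one common workaround is to paste the material you want discussed directly into the prompt so the model works from your text rather than from its training data. The sketch below extends the hypothetical OpenAI-compatible setup from the earlier example; the file name and model name are placeholders.

```python
# A minimal sketch of "grounding" a prompt in supplied source text so the
# model is not relying on stale training data or inventing citations.
# ASSUMPTIONS: an OpenAI-compatible endpoint; placeholder URL/key/model/file.
from openai import OpenAI

client = OpenAI(base_url="https://example.ufl.edu/v1", api_key="YOUR_UF_API_KEY")

with open("week3_article.txt", encoding="utf-8") as f:  # hypothetical reading
    course_reading = f.read()

prompt = (
    "Using ONLY the article below, summarize its main argument and quote "
    "one supporting passage.\n\n--- ARTICLE ---\n" + course_reading
)

response = client.chat.completions.create(
    model="example-model",  # hypothetical model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```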
Taking Generative AI into Account when Designing Assessments
Generative AI tools like NaviGator Chat are able to write human-like text about nearly any topic. This has led to concerns about the ways that students can use these tools to violate academic integrity policies. On the other hand, using NaviGator Chat and other AI tools is a valuable skill that students should have the opportunity to learn in preparation for potential real-world applications. There are a few ways that faculty can design assessments to encourage the proper use of AI tools and to minimize students’ ability and incentive to misuse AI in writing assignments.
- Set clear expectations and guidelines: Many instructors wish to allow students to use the latest tools and to gain proficiency with using AI to accomplish tasks. It is important to send clear messages about when students can and cannot use generative AI in their courses. For example, you may permit students to use AI to summarize research literature, but disallow the use of generative AI when writing their own impressions of the studies.
For the purpose of providing examples, there is a public Google Document that has collected a number of syllabus statements regarding the use of generative AI in classes. This document only serves to provide examples; it is not a recommendation to use any of the statements without modification for your own requirements and those of your department or college.
- Ensure assignments align with student learning objectives: It is important that students perceive the value of an assignment and understand how it relates to the objectives of the course. Students may be tempted to take shortcuts when assignments seem to have no purpose.
- Use authentic assessments: Create assignments that use case studies or require students to produce work similar to real-world situations. Designing assignments that clearly help students with workforce readiness will increase motivation and reduce the likelihood that AI is used in inappropriate ways.
- Provide Alternative Formats for Assessment using Universal Design for Learning: Allow students to show that they have met the student learning objectives through formats other than writing. Presentations, debates, creative works, infographics, recorded videos, and podcasting are just a few of the ways that students could demonstrate their understanding of the course content. The practice of offering options for students to succeed is a part of Universal Design for Learning (UDL).
- Ask for Specifics and References: Instead of asking students to write about general topics, ask them to analyze specific arguments in the reading material. An example could be an assignment to contrast two arguments about a single topic made by specific authors in the field, using the examples provided in class. Overall, AI is more effective at writing general statements than at addressing specific points of view with quotations and references to personal events to support an argument.
- Use Recent Events or Material: Large language models are trained on data up to a certain date that varies based on the model, so assessments that reference more recent events or published articles may not be as readily completed by AI. However, many AI tools are now able to search for additional context to answer questions, or context can be provided to an AI in order to help it write an intelligent response. Solely relying on recent events or published articles does not prevent students from using AI in unauthorized ways.
- Use Offline Material: In your assessments, ask students to analyze or reference events or discussions that took place in your classroom.
- Utilize University Resources: Multiple resources for students, faculty, and staff are available on the UF scholar and campus libraries websites and could be used to enhance and add information to assignments.
- Request Personal Impressions: Ask students to explain their personal experiences or impressions on a topic.
- Break Down Assignments: Consider breaking down larger, written assignments into an outline submission, a literature review submission, and multiple draft submissions. Also consider converting some parts of the written assignments into other multimedia formats, such as a recorded video or podcast, a drawing, a trifold brochure, or other inventive formats.
- Consider Flipped Classes: A flipped class is one where the majority of the instructional time takes place outside of the classroom, while the assessment activities (group work, quizzes, iClicker assignments, etc.) take place during the class period.
- Test the AI Yourself: When designing an assignment, consider prompting NaviGator Chat yourself to determine how well it can write on the topic.
The Tech Byte webinar titled AI Impacts on Teaching and Learning discussed many of the concerns about generative AI's impacts on teaching and learning. This webinar had a brief overview of ChatGPT and considered course and assignment design strategies in light of this new technology. The recording (1:25:34) and the PowerPoint slides for this presentation are available for viewing. The CITT Tech Byte page also has links to future and past Tech Byte events.
Generative AI Detectors
There are several tools available or in development that claim to determine whether a sample of text was written by a human or by an AI. These tools should be approached with caution, however, as they are still in their infancy and are prone to both false positives and false negatives. To offer an anecdote, a CITT staff member input the opening paragraph of a paper written during their Master’s program, and the detector said there was a 99.6% probability that the text was written by an AI. The negative impacts of falsely accusing a student of using generative AI are great, and both sides of an academic misconduct claim of this nature have few tools to determine guilt or prove innocence. Even OpenAI, the creators of ChatGPT, acknowledged the challenge and discontinued offering an AI detector because of its inaccuracies.
Therefore, it is not recommended that you rely solely on these detectors to support claims of academic misconduct on the part of students. New and improved detectors are rapidly being developed and may change this answer, but more capable generative AI tools are also released on a regular basis. Detecting AI-generated text is difficult because large language models (LLMs) are trained on real human writing and can be prompted to write in a variety of styles that foil detection, so detectors carry a high risk of false positives. Instead, you might design assessment strategies that are more resistant to academic misconduct, require multiple drafts of writing assignments to be submitted, or update the point allocation in your course to more heavily weight assessments where AI could not be used.
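To see why false positives are so consequential at scale, consider a quick Bayes' rule calculation. The numbers below are purely illustrative assumptions (they do not describe any real detector or student population); the point is that even a seemingly small false positive rate can mean a large share of flagged students are innocent.

```python
# Illustrative Bayes' rule arithmetic: how often is a flagged submission
# actually AI-written? ALL numbers below are assumptions for illustration,
# not measurements of any real detector or class.
base_rate = 0.10       # assumed share of submissions actually written by AI
sensitivity = 0.90     # assumed chance the detector flags true AI text
false_positive = 0.05  # assumed chance it wrongly flags honest human writing

flagged_ai = base_rate * sensitivity              # 0.09 of all submissions
flagged_human = (1 - base_rate) * false_positive  # 0.045 of all submissions

p_ai_given_flag = flagged_ai / (flagged_ai + flagged_human)
print(f"P(actually AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.67
# Under these assumptions, roughly one in three flagged students is innocent.
```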