
Artificial Intelligence in Academics

This guide was created for the FSCJ community and offers resources and information about Artificial Intelligence to help students, faculty, and the greater community.

AI Literacy


Below are guidelines and tips for becoming "AI literate," that is, gaining skills that enable you to use AI effectively, ethically, safely, and in a way that supports your learning.

***In general, if you do use AI for any of your FSCJ classwork, please keep these important considerations in mind.***

Be open and honest about your use of AI 

  • If you use an AI tool like ChatGPT for classroom work, acknowledge it so that your professor knows. 
  • For example, if you use ChatGPT to draft a classroom discussion post for you, add a statement like this to the post, so you’re completely transparent about having used AI: “I used ChatGPT to write a first draft of this post. I critically evaluated the accuracy of ChatGPT’s draft, verifying facts and ideas, then I largely rewrote the AI draft in my own words and phrases.” 
  • If needed, you can even cite an AI tool like ChatGPT in your reference list for a writing assignment. Guidelines are available for APA, MLA, and Chicago styles.

Verify AI content 

  • AI tools like ChatGPT are imperfect. They are known to create content that simply isn't true, a problem often called "hallucination." 
  • If you use AI to generate a piece of writing for you, you have to critically evaluate everything that it wrote. Use a search engine like Google to check any facts or ideas generated by AI. 
  • The one thing you can never do is simply put a prompt into ChatGPT for a classroom assignment, then copy and paste the AI-created content and submit it to your professor as is. That is the opposite of the kind of engaged, active learning that helps students grow intellectually. When AI does the work for you, you miss out on the learning, which can have repercussions for your future classes and career.
  • If you use AI, think of it as an assistant who’s efficient but not a real expert on the subject matter. You have to carefully check what AI wrote before using it as a starting point for your work. 

Don’t overshare with AI 

  • ChatGPT and other AI tools are like any other website where you type in information, so be careful to keep your personal information safe. 
  • Use a secure computer network when interacting with AI so that hackers cannot intercept your information. 
  • Never type sensitive, personal information into an AI prompt. For example, if you use ChatGPT to research Social Security, don't type in your own SSN!  

Input and Output

  • AI cannot access information behind paywalls, including the library's journals and databases.  
  • AI was not trained to distinguish fact from fiction. 
  • Due to the nature of machine learning, an AI tool's output may be the same or similar across users.  

Considerations when Using AI Tools


When we do research online, we need to think critically about the sources we find and whether we want to build our research on them. Here are some questions we ask ourselves:

  • How relevant is this to my research?
  • Who/what published this? When was it published? 
  • Why was this published?
  • Where did the information in here come from? 

We must also ask ourselves questions when using AI software tools. The LibrAIry has created the ROBOT test, a set of questions to consider when evaluating AI technology.

Reliability

Objective

Bias

Ownership

Type

Reliability

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials or bias?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?

Bias

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?

Ownership

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?

Type

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention?