Pathways College

Artificial Intelligence (Generative) Resources

Welcome to this library guide on artificial intelligence (AI). The guide introduces key concepts, impacts, and approaches to help you become more familiar with generative AI.

Ethics of AI

Misinformation & Bias in AI

Misinformation

While generative AI tools can help users with tasks such as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying fully on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," a term used in AI research to describe false or fabricated information that the system presents as fact. Oftentimes, these "hallucinations" are delivered in a very confident manner and consist of partially or fully fabricated citations or facts.

Certain AI tools have even been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead audiences. Referred to as "deepfakes," these materials can be used to subvert democratic processes and are thus particularly dangerous.

Additionally, the information presented by generative AI tools may lack currency, as some systems do not have access to the latest information. Instead, they may have been trained on datasets with a fixed cutoff date, and can therefore generate outdated representations of current events and the related information landscape.

Bias

Another potentially significant limitation of AI is the bias that can be embedded in the products it generates. Trained on immense amounts of data and text from the internet, these large language model systems learn to predict the most likely sequence of words in response to a given prompt, and will therefore reflect and perpetuate the biases inherent in their training data. An additional source of bias is that some generative AI tools use reinforcement learning from human feedback (RLHF), and the human testers who provide this feedback are themselves not neutral. Accordingly, generative AI tools like ChatGPT have been documented to produce output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.

Related Recommendations  

  • Meticulously fact-check all of the information produced by generative AI, including verifying the source of all citations the AI uses to support its claims.
  • Critically evaluate all AI output for any possible biases that can skew the presented information. 
  • Avoid asking the AI tools to produce a list of sources on a specific topic as such prompts may result in the tools fabricating false citations. 
  • When available, consult the AI developers' notes to determine if the tool's information is up-to-date.
  • Always remember that generative AI tools are not search engines: they use large amounts of data to generate responses designed to sound plausible, not responses verified to be accurate.

Artificial Intelligence and Academic Integrity

Plagiarism

Generative AI tools present new challenges to academic integrity, particularly concerning plagiarism. Plagiarism is generally defined as presenting someone else's work or ideas as your own. Although a generative AI tool may not be classified as a "person," using text generated by such a tool without proper citation is still considered plagiarism because the work is not the researcher's original creation. Policies for using and crediting generative AI tools can vary from class to class, so reviewing the syllabus and seeking clarification from the professor as needed is essential.

A note about plagiarism detection tools:

Several AI detection tools are currently available to publishers and institutions. However, there are concerns about their accuracy, and they may produce false accusations. In addition, because generative AI tools do not replicate large portions of existing text verbatim, traditional text-matching tools struggle to detect AI-generated writing effectively.

False Citations

Another area of academic integrity impacted by generative AI (GAI) tools is the issue of false citations.

Providing false research citations, whether intentionally or unintentionally, violates academic integrity standards. Generative AI tools such as ChatGPT have been shown to produce inaccurate citations. Even when a citation refers to an actual paper, the content ChatGPT attributes to that paper may still be incorrect.

Related Recommendations
  • If GAI tools are only allowed for topic development in the early stages of research, you may not need to cite them at all. However, it is still essential to confirm this with your professor first.
  • When providing commentary or analysis on text generated by a chatbot, it is essential to include a citation if you are paraphrasing its results or quoting it directly. For more information on how to cite AI tools, please refer to the Citation Guide page of this guide.
  • Always verify citations for accuracy. If you cite information from a source, credit the original source rather than the AI tool you used to find it.
Copyright © 2017 Pathways College