
Artificial Intelligence / Generative AI / Student Edition

Ethics (ChatGPT Prompt and Response)

"Please provide the top three types of ethical concerns when using GenAI. Use numbered responses with details specified under each type of ethical concern. Cite your sources in APA7 format."
1. Bias and Discrimination
AI models, including generative models, often reflect and perpetuate biases present in the data they are trained on. If the training data contains biased representations—whether related to gender, race, age, or socioeconomic status—the AI model may unintentionally reproduce these biases in its outputs, leading to unfair or discriminatory results.

Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems 29 (NIPS 2016). https://arxiv.org/abs/1607.06520

2. Misinformation and Manipulation
GenAI systems have the ability to generate highly convincing and realistic text, images, and videos, which makes them susceptible to being used for spreading misinformation, disinformation, and even manipulation. Because these models can create realistic content with little effort, they can be weaponized to deceive people, mislead the public, or alter opinions.

Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41. https://arxiv.org/abs/2004.11138

3. Lack of Accountability and Transparency
One of the most significant ethical concerns surrounding GenAI is the lack of transparency in how these models make decisions or generate content. These "black-box" systems, due to their complexity, often operate without clear explanations for their behavior, making it difficult for users to understand why a model produces a certain output.

Diakopoulos, N. (2017). Enabling accountability of algorithmic media: Transparency as a constructive and critical lens. In T. Cerquitelli, D. Quercia, & F. Pasquale (Eds.), Transparent data mining for big and small data (pp. 25–43). Springer. https://link.springer.com/chapter/10.1007/978-3-319-54024-5_2

OpenAI. (2024). ChatGPT (Dec 26 version) [Large language model]. https://chat.openai.com/chat

Bias (ChatGPT Prompt and Response)

"Please specify the top three ways GenAI chatbots are biased. Use numbered entries and explain each type of bias. Cite your sources in APA7 format."
1. Data Bias (Training Data Bias)
GenAI chatbots are trained on large datasets that often contain biases from the real world. These datasets are typically scraped from the internet, which means they are likely to reflect historical, social, and cultural biases found in human-generated content. If a dataset includes more content from certain demographics or perspectives (e.g., Western, male, or affluent), the chatbot will disproportionately learn to reflect those viewpoints, while underrepresenting or misrepresenting others.

Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems 29 (NIPS 2016). https://arxiv.org/abs/1607.06520
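One way to see data bias concretely is to audit how much of a training corpus comes from each source group. The tiny sketch below uses made-up region labels and counts purely for illustration; real audits rely on actual dataset documentation or classifiers.

```python
from collections import Counter

# Hypothetical toy "training corpus" labeled by source region.
# The labels and counts are illustrative assumptions only.
documents = (
    ["north_america"] * 55 + ["europe"] * 30 +
    ["asia"] * 10 + ["africa"] * 3 + ["south_america"] * 2
)

counts = Counter(documents)
total = sum(counts.values())

# Share of the corpus contributed by each region: a skewed distribution
# means the model sees far more text from some perspectives than others.
shares = {region: n / total for region, n in counts.items()}

for region, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{region:>14}: {share:.0%}")
```

In this toy corpus, 85% of documents come from just two regions, so a model trained on it would disproportionately absorb those regions' viewpoints.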

2. Algorithmic Bias (Model Architecture and Design Bias)
Even if the data itself is balanced or diverse, the way AI models are constructed can introduce bias. The choice of algorithms, model architectures, and even hyperparameters used during the training process can unintentionally favor certain patterns or outcomes. 

Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186. https://doi.org/10.1126/science.aal4230

3. Deployment and Interaction Bias (User Interaction and Feedback Loops)
Once deployed, GenAI chatbots are influenced by how users interact with them, creating feedback loops that reinforce or exacerbate existing biases. For example, if a chatbot is used extensively in one region or demographic, it may start to generate responses that reflect the preferences or interests of that particular group. Over time, the model may "learn" to prioritize topics, language, or responses that reflect a narrower worldview or a particular set of interests, especially if these biases are continuously reinforced by users' inputs.

Xue, J., Wang, Y. C., Wei, C., Liu, X., Woo, J., & Kuo, C. C. J. (2023). Bias and fairness in chatbots: An overview. arXiv. https://arxiv.org/abs/2309.08836
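The feedback loop described above can be made concrete with a toy simulation. The update rule, population split, and learning rate below are illustrative assumptions, not how any deployed chatbot is actually trained:

```python
import random

random.seed(0)

# Two topics the chatbot can emphasize; start perfectly balanced.
weights = {"topic_a": 0.5, "topic_b": 0.5}

# Hypothetical user population: 80% of interactions favor topic_a.
def sample_user_preference():
    return "topic_a" if random.random() < 0.8 else "topic_b"

LEARNING_RATE = 0.01

# Each interaction nudges the model toward the topic the user engaged
# with — a crude stand-in for fine-tuning on user feedback.
for _ in range(1000):
    liked = sample_user_preference()
    other = "topic_b" if liked == "topic_a" else "topic_a"
    weights[liked] += LEARNING_RATE * (1 - weights[liked])
    weights[other] -= LEARNING_RATE * weights[other]

print(weights)
```

Because the majority group supplies most of the feedback, the model's emphasis drifts from a balanced 50/50 toward roughly the 80/20 split of its users, illustrating how interaction data alone can narrow a system's worldview.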

OpenAI. (2024). ChatGPT (Dec 26 version) [Large language model]. https://chat.openai.com/chat

Environmental Impact (ChatGPT Prompt and Response)

"Please specify the top three types of environmental impacts GenAI makes on our planet. Use numbered responses with details specified under each type of environmental impact. Cite your sources in APA7 format."
1. Energy Consumption and Carbon Emissions
The carbon footprint of training large models can be massive. For example, Strubell et al. (2019) estimated that training a large transformer model with neural architecture search emits roughly 284 metric tons of CO₂, comparable to the lifetime emissions of five average cars. As AI systems become more advanced, the environmental toll of these processes increases unless more sustainable energy practices are adopted.

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 3645–3650. https://doi.org/10.18653/v1/P19-1355
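Estimates like the one above come from back-of-envelope calculations that multiply hardware power draw by training time and grid carbon intensity. Every figure in the sketch below is an illustrative assumption, not a measurement of any particular model:

```python
# Back-of-envelope estimate of training emissions.
# All inputs are illustrative assumptions.
GPU_COUNT = 1000           # accelerators used for training (assumed)
GPU_POWER_KW = 0.4         # average draw per accelerator, kW (assumed)
TRAINING_DAYS = 30         # wall-clock training time (assumed)
PUE = 1.2                  # data-center power usage effectiveness (assumed)
GRID_KG_CO2_PER_KWH = 0.4  # grid carbon intensity, kg CO2/kWh (assumed)

# Total electricity: hardware draw x hours, scaled up by data-center
# overhead (cooling, power conversion) via PUE.
energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_DAYS * 24 * PUE

# Convert electricity to emissions using the grid's carbon intensity.
co2_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")        # → Energy: 345,600 kWh
print(f"Emissions: {co2_tonnes:,.1f} t CO2")   # → Emissions: 138.2 t CO2
```

Even this modest hypothetical run lands in the hundreds of tons of CO₂ range, which is why the choice of energy source and data-center efficiency matters so much.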

2. E-Waste and Hardware Resource Consumption
As AI models become larger and more complex, hardware components are often discarded after a few years, contributing to e-waste. The extraction of minerals for hardware production also leads to environmental degradation and pollution. For example, the mining of rare earth metals for GPUs can cause significant ecological damage, including water contamination and habitat destruction.

Delort, E., Riou, L., & Srivastava, A. (2023). Environmental impact of artificial intelligence [Report]. INRIA; CEA Leti. https://inria.hal.science/hal-04283245/document

3. Water Usage for Cooling Data Centers
Data centers can use millions of gallons of water annually for cooling, which can deplete local water sources and negatively impact surrounding ecosystems. In regions already experiencing water scarcity or drought, this heavy water demand can exacerbate existing environmental challenges.

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less "thirsty": Uncovering and addressing the secret water footprint of AI models. arXiv. https://arxiv.org/abs/2304.03271
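Water use is typically estimated from energy consumption via a water usage effectiveness (WUE) factor, i.e., liters of cooling water per kWh of IT energy. The numbers below are illustrative assumptions for a hypothetical mid-size facility:

```python
# Rough sketch of annual data-center cooling water use.
# All inputs are illustrative assumptions.
ANNUAL_IT_ENERGY_KWH = 50_000_000  # hypothetical mid-size data center
WUE_L_PER_KWH = 1.8                # assumed on-site water per kWh of IT energy

# Total cooling water = IT energy x water usage effectiveness.
annual_water_liters = ANNUAL_IT_ENERGY_KWH * WUE_L_PER_KWH

# Convert liters to US gallons (1 gallon ≈ 3.785 L).
annual_water_gallons = annual_water_liters / 3.785

print(f"{annual_water_gallons:,.0f} gallons/year")
```

Under these assumptions a single facility would draw on the order of tens of millions of gallons per year, consistent with the "millions of gallons" scale described above.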

OpenAI. (2024). ChatGPT (Dec 26 version) [Large language model]. https://chat.openai.com/chat