Case Study Assignment: Exploring Ethical Issues in AI Communication

Objective: To analyze and evaluate the ethical implications of AI in communication through a focused case study, integrating concepts and theories discussed in class.

Word Count: 900 words (excluding references)

<aside> 🧠 If you want to explore a case study not on the list, just ask me! I’m happy to work something out that you’re interested in exploring more deeply.

Pro Tip: The point of the midterm is for you to show off how you’ve incorporated class texts into your thinking. I’m looking to see you engage with the theories we’ve read.

</aside>

Requirements:

  1. Case Selection: Choose one case study or scenario from the provided list below. The case should involve ethical considerations related to AI in communication.
  2. Case Overview: Provide a concise overview of the selected case, including the relevant background information, key stakeholders, and the AI technology or application involved.
  3. Ethical Analysis: Apply ethical frameworks, theories, and concepts discussed in class to analyze the ethical issues presented in the case. Use at least three class texts to support your analysis. Consider the ethical dimensions, potential risks, and impacts on various stakeholders involved.
  4. Evaluation and Reflection: Assess the strengths and weaknesses of the existing ethical frameworks or guidelines that could be applied to the case. Reflect on the complexities and challenges of addressing the identified ethical issues within the context of AI in communication.
  5. Recommendations: Based on your analysis, propose specific recommendations or guidelines to address the ethical concerns raised in the case. Consider the responsible and ethical use of AI in communication and its implications for society, privacy, bias, transparency, accountability, and other relevant factors.
  6. Clarity and Structure: Present your case study in a well-organized manner, using clear and concise language. Structure your analysis logically with headings and subheadings, ensuring a coherent flow of ideas. Use proper citations and referencing for all sources, including the three class texts you incorporate.

<aside> 🧠 Engaging with theory doesn’t necessarily equate to agreeing with it. Theory is used to help explain phenomena and observations. So, you can uphold a theory, expand it, disprove it, or show how it’s applicable in a new context.

</aside>

Case Selection

  1. Facebook and Cambridge Analytica: This case deals with the ethical issues around data privacy and misuse. Cambridge Analytica, a political consulting firm, acquired data on millions of Facebook users without their consent. This data was used to influence voter opinion during the 2016 U.S. presidential election.
  2. YouTube and Content Recommendation Algorithms: YouTube's content recommendation algorithms have been accused of promoting harmful and extremist content. Critics argue that the algorithm prioritizes user engagement over the quality or veracity of content, thereby contributing to misinformation and radicalization.
  3. Twitter and AI Bias: In 2020, Twitter's image cropping algorithm was accused of racial and gender bias. Despite Twitter's attempts to create a neutral AI, the model seemed to favor people with lighter skin tones and women in its previews.
  4. Instagram and Mental Health: Instagram's algorithm prioritizes content that gets the most engagement, often promoting harmful trends and potentially negatively impacting users' mental health. This case study allows students to explore ethical questions around user wellbeing, algorithmic responsibility, and the potential need for intervention in content curation.
  5. Amazon's Rekognition and Racial Bias: Amazon's facial recognition software, Rekognition, has been criticized for its accuracy and biases, particularly against people of color and women. This case study invites examination of the ethical issues involved in the deployment of AI systems, including bias, fairness, and accountability.
  6. TikTok and User Content Moderation: TikTok's content moderation algorithms have been under scrutiny for their perceived bias, notably for suppressing content from disabled, non-binary, and plus-size creators. This case prompts discussion about bias in AI, platform responsibility, freedom of expression, and the potential for negative social impact.
  7. Google Photos and Racial Misclassification: In 2015, Google Photos' image recognition software misclassified two African Americans as gorillas, prompting significant backlash. The incident illustrates the potential harm and offense caused by AI errors, and raises issues around oversight and redress mechanisms in AI systems.