A new report from Common Sense Media highlights a growing distrust among American teenagers toward major technology companies, with concerns spanning well-being, ethical decision-making, data privacy, AI responsibility, and political influence. This skepticism is shaping how teens engage with digital content, adapt to AI risks, and advocate for creative ownership protections in an evolving technological landscape.
The Common Sense Media report reveals a significant erosion of trust among American teenagers toward major technology companies. This decline in confidence spans several aspects of Big Tech's operations:
Well-being and safety: A majority of teens expressed low trust in tech companies' commitment to their well-being and safety [1][2].
Ethical decision-making: Most surveyed teens doubted the ability of Big Tech to make responsible ethical choices [1][3].
Data privacy: Teens showed little confidence in tech companies' protection of their personal information [1][2].
AI responsibility: Nearly half of the teens surveyed (47%) had little to no trust in tech companies making responsible decisions about AI usage [1][3].
Political influence: The report highlights teens' awareness of tech companies' attempts to influence politics, such as donations to political campaigns, which further eroded trust [1].
This widespread distrust among teens suggests a growing generational skepticism towards Big Tech, potentially influencing future consumer behaviors and policy discussions around technology regulation and digital rights.
The erosion of trust in Big Tech and concerns about AI have led to significant changes in how teens approach online interactions and information consumption. Adolescents are developing more sophisticated strategies to navigate the digital landscape:
Increased skepticism: Teens are becoming more critical of online content, taking time to consider whether news stories are true before sharing them [1]. This heightened awareness reflects a growing understanding of the prevalence of misinformation.
Adaptive trust: Young people are learning to adjust their level of skepticism based on their previous experiences with online information quality [2]. This adaptive approach allows them to make more nuanced judgments about digital content.
Emphasis on verification: Many teens are now actively seeking to verify information through multiple sources and fact-checking methods [3]. This behavior demonstrates a growing commitment to digital literacy and critical thinking skills.
These changes in online behavior highlight the need for continued education in digital literacy and critical thinking to empower teens to navigate the complexities of the online world safely and effectively [4].
The rise of AI-generated content has sparked new concerns among teens regarding intellectual property and creative rights in the digital age. With the U.S. Copyright Office stating that AI-generated content is generally not entitled to copyright protection without proof of human involvement [1], young creators are grappling with the implications for their own work. Many teens are embracing AI as a tool to enhance their creativity, using it to research and innovate more quickly [2]. However, they're also becoming increasingly aware of the need to disclose AI use in their creative processes to maintain copyright protections [1].
This evolving landscape presents both challenges and opportunities for young creators. While AI may automate some entry-level tasks in industries like gaming [3], it's also creating new roles in AI development, content curation, and "prompt engineering" [4]. As teens navigate this shifting terrain, they're advocating for clearer guidelines on AI use in content creation and stronger protections for creative rights [5]. This growing awareness reflects a generation that is both tech-savvy and increasingly cognizant of the complexities surrounding intellectual property in the digital realm.
The rise of AI technology has introduced new challenges in maintaining authenticity and safety in digital interactions, particularly for teens. Deepfake technology poses a significant threat to traditional authentication systems, potentially compromising biometric verification methods like facial recognition and voice analysis [1]. This development raises concerns about identity theft and fraud, especially as young people increasingly rely on digital platforms for many aspects of their lives.
To address these challenges, experts emphasize the need for transparency in AI systems and robust safeguards. This includes implementing effective detection, verification, and explainability mechanisms to counteract potential harms of generative AI [2]. For teens, who are particularly vulnerable to misleading online content [3], developing critical thinking skills and digital literacy is crucial. Educating young users about the risks of AI-generated content and providing them with tools to verify information can help mitigate the impact of sophisticated disinformation campaigns that often target youth [4].