11 May 2026

AI's Dual Impact: Precarious Creative Work and Psychological Diagnosis

Artificial intelligence is reshaping creative labor, turning screenwriters into AI trainers, while raising profound ethical questions about its use in mental health diagnosis. An analysis of AI's human impact.


Artificial intelligence is rapidly redefining the labor landscape and raising complex ethical questions, especially in creative sectors and mental health.

What happened

A recent report from [Wired](https://www.wired.com/story/i-work-in-hollywood-everyone-who-used-to-make-tv-now-training-ai/) revealed that many Hollywood screenwriters, once engaged in creating television content, now find themselves working as artificial intelligence "trainers." This new type of gig work, described as "soul-crushing," has seen industry professionals complete up to 20 contracts in just 8 months across various platforms, indicating a rapid and often precarious transformation of creative careers. The work involves refining and improving large language models (LLMs), a task that, although paid, is perceived as a professional downgrade and a loss of creative autonomy.

In parallel, the ability of LLMs to interpret complex narratives has been tested in a clinical setting. Research published on arXiv compared the performance of Gemini Pro models with that of mental health professionals in diagnosing personality disorders, such as Borderline Personality Disorder (BPD) and Narcissistic Personality Disorder (NPD), based on autobiographical accounts. The AI models achieved an overall diagnostic score of 65.48%, outperforming the human professionals in the sample by 21.91 percentage points. This raises significant questions about the potential encroachment of AI into highly sensitive and human-centric domains.

Why it matters

The transformation of creative work into "AI training" represents a wake-up call for the future of many professions. This is not merely about automation, but about deskilling that turns the art of storytelling into an activity of labeling and correction. This scenario threatens not only the economic sustainability of artists but also the richness and originality of cultural production, suggesting a future where human creativity is channeled to serve machines rather than directly inspire audiences. The precarization of these roles, often contract-based and without guarantees, highlights growing inequality in the AI-driven labor market.

The application of LLMs in diagnosing personality disorders, on the other hand, opens up complex and potentially risky scenarios. While the efficiency and data processing capabilities of algorithms may seem promising, psychological diagnosis requires a deep understanding of human context, empathy, and the ability to interpret nuances that go beyond linguistic patterns. Over-reliance on AI in this field could lead to misdiagnoses, a dehumanization of the care process, and serious implications for patient privacy. An algorithm's ability to identify patterns does not equate to a professional's clinical understanding or sensitivity.

The HDAI perspective

These developments underscore the urgency of an approach to ethical AI that places human beings at its core. The philosophy of Human Driven AI (HDAI) promotes a vision where technology serves people, improving their lives and not making their work precarious or compromising delicate sectors like mental health. It is crucial that AI development is accompanied by robust AI governance and clear regulatory frameworks that protect workers and patients. We must ensure that innovation does not come at the expense of human dignity and quality of care. The true challenge is not just technological, but ethical and social: how can we steer AI towards a future that values human contribution and protects vulnerabilities? These are central themes that will be discussed at the HDAI Summit 2026, where experts from around the world will converge to outline responsible paths for artificial intelligence in Italy and beyond.

What to watch

It will be crucial to observe how regulations, such as the EU AI Act, adapt to these new labor and clinical dynamics. The industry will need to develop more stringent ethical standards, and society will have to confront the need to reskill the workforce and to clearly define the limits of AI's autonomous decision-making in sensitive contexts. The discussion on the AI future of work and responsible AI has only just begun.


AI & News Column, an editorial section of the publication The Patent ® Magazine|Editor-in-Chief Giovanni Sapere|Copyright 2025 © Witup Ltd Publisher London|All rights reserved
