by Khatia Leonidze, Batumi Shota Rustaveli State University, Georgia

This article is the result of a joint initiative between the Media & Learning Association and the Georgian National Communications Commission (ComCom) to promote and provide a platform for young researchers in digital and media literacy. It was written by the third-place winner of the student analytical paper competition “Become a Media Literacy Researcher”, organised by ComCom, and represents a condensed version of the full-length academic paper. The complete research, including the full bibliography, is available here.
Why Does Artificial Intelligence Matter in Media?
In the 21st century, the media landscape is undergoing a transformation unlike any before. What once relied primarily on human judgment, editorial routines and institutional authority is now increasingly shaped by algorithms, automated systems and machine learning models. Artificial intelligence (AI) has moved from the periphery of media production to its very core, reshaping how information is created, distributed and consumed.
Traditional media outlets – long regarded as gatekeepers of public discourse – are now navigating a technological environment in which speed, personalisation and automation dominate. This shift raises a fundamental question: does AI represent progress for the media ecosystem, or does it threaten its credibility, integrity and democratic role?
The significance of this topic extends far beyond technical innovation. AI is not merely altering newsroom workflows; it is transforming social, political, legal and ethical frameworks that underpin modern media systems. Its integration into journalism and digital platforms exemplifies a phenomenon of dual nature. On the one hand, AI increases efficiency, accelerates content production and enhances data analysis. On the other, it introduces serious risks related to information reliability, journalistic ethics, personal data protection and the large-scale dissemination of disinformation.
These concerns are intensified by the absence of a fully harmonised legal framework. Although the European Union (EU) has taken significant steps – most notably through the adoption of the AI Act – regulatory efforts continue to lag behind the rapid pace of technological development. Against this backdrop, the present article explores how AI is reshaping the media space and examines its inherently dual character.
AI in the Media Ecosystem
While definitions of AI vary across jurisdictions, one of the most comprehensive is provided by the EU’s AI Act. According to Article 3, AI refers to machine-based systems capable of operating autonomously and generating outputs such as predictions, recommendations, or content based on data processing. These capabilities help explain why AI has been so rapidly integrated into the media sector.
Today, AI plays a central role in shaping information flows. Algorithmic recommendation systems decide which news stories users encounter, often prioritising relevance and engagement over public interest. Automated tools assist journalists in data analysis, content generation and fact-checking, while moderation systems filter illegal or harmful material, potentially contributing to safer online environments.
Yet these developments are accompanied by growing uncertainty. AI-generated content raises unresolved legal questions concerning copyright and intellectual property. It remains unclear who owns such content: the developer of the system, the media organisation that deploys it, or the user who prompts it (Grünke et al., 2024). At the same time, algorithmic personalisation risks narrowing users’ informational horizons. By continuously reinforcing existing preferences, AI-driven systems contribute to “filter bubbles” that limit exposure to diverse perspectives and intensify social polarisation (Nikolinakos, 2023).
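The “filter bubble” dynamic described above can be illustrated with a minimal sketch. All of the data and function names below are hypothetical, invented purely for illustration: a toy feed ranks articles by how often the user has already clicked on each topic, so the topics a user engages with crowd out everything else.

```python
# Minimal, hypothetical sketch of engagement-driven ranking:
# the feed keeps surfacing whatever the user already clicks,
# narrowing the range of topics they see (a "filter bubble").

from collections import Counter

articles = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "culture"},
    {"id": 5, "topic": "politics"},
]

def rank_feed(articles, click_history):
    """Score each article by how often the user clicked its topic."""
    topic_clicks = Counter(click_history)
    return sorted(articles, key=lambda a: topic_clicks[a["topic"]], reverse=True)

# A user who has clicked mostly politics sees politics pushed to the top,
# while science and culture sink out of view.
clicks = ["politics", "politics", "science", "politics"]
feed = rank_feed(articles, clicks)
top_topics = [a["topic"] for a in feed[:3]]
print(top_topics)  # → ['politics', 'politics', 'politics']
```

Real recommendation systems are vastly more complex, but the feedback loop is the same: optimising purely for past engagement, rather than diversity or public interest, systematically reinforces existing preferences.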
AI, Disinformation and the Crisis of Trust
Perhaps the most alarming challenge posed by AI in media is its role in the rapid and large-scale spread of disinformation. While false information has existed throughout history, AI dramatically accelerates its production and dissemination. Generative systems can now produce convincing text, images and videos at scale, blurring the line between authentic journalism and fabricated narratives.
A striking example occurred in May 2023, when an AI-generated image depicting an explosion near the Pentagon circulated widely on social media. Despite being false, the image triggered public panic and confusion before authorities confirmed its inauthenticity (O’Sullivan & Passantino, 2023). This incident illustrates what has been described as the “Trust Laundering Effect,” whereby fabricated content gains credibility by mimicking journalistic aesthetics and circulating through trusted channels (The Guardian, 2023).
Deepfakes pose an even greater threat. In 2024, a fake video impersonating a “France 24” journalist spread false claims about the President of France. The video convincingly replicated the visual and linguistic markers of professional journalism, demonstrating how easily AI can be weaponised to manipulate public perception (France 24, 2024). Such cases do not merely mislead audiences; they erode trust in media institutions as a whole, weakening the foundations of informed public discourse.
Findings and Conclusion: Navigating AI’s Dual Nature
The analysis presented in this study demonstrates that AI does not introduce entirely new risks but rather amplifies existing vulnerabilities within the media ecosystem. While AI holds immense potential to enhance efficiency, accuracy and accessibility, it simultaneously intensifies concerns related to ethical standards, bias, transparency and the erosion of professional journalistic identity.
Despite regulatory initiatives such as the EU’s AI Act, current legal frameworks remain fragmented and insufficient to address the full scope of AI’s impact on media. As a result, AI clearly embodies a dual nature: it functions both as a driver of innovation and as a potential threat to democratic processes and public trust. Addressing this tension requires a balanced and multidimensional approach. Legal regulation must be complemented by ethical standards, institutional accountability and media literacy initiatives that empower both journalists and audiences. Only by recognising and actively managing the dual nature of AI can its positive potential be harnessed while minimising the risks of misuse. In the evolving media environment, the challenge is no longer whether AI will shape the future of journalism, but whether that future will strengthen or undermine the democratic role of the media.



