Photographic truth? Time to ask questions.

The dire conditions of people in Gaza trying to survive without sufficient access to food, clean water, and medical care have been well documented by international media and human rights and aid organisations. These reports have also revealed the struggles of thousands of families with children displaced by the war. Amid the chaos, new images emerge from the ground every day, shared and viewed millions of times online. But not all of them are real.
The most recent wave of images circulating on social media includes several AI-generated photos of children lying huddled together on wet, muddy ground inside or in front of makeshift tents. Often accompanied by the colours of the Palestinian flag, or the flag itself, these images suggest the subjects are displaced Gazan children. Given the scale of devastation in Gaza, such AI-generated images risk being engulfed in the fog of (social media) war, gaining public attention and being mistaken for journalistic evidence when they are anything but. We have witnessed the same dynamic in the ongoing information warfare between Ukraine and Russia.
Beyond disrupting truth and accuracy, these images present another serious challenge for journalism: by blurring the line between reality and fiction, they risk deepening public distrust in the media.
The proliferation of AI-generated images in mainstream journalism is making the war even more chaotic and confusing for the public and, in some cases, is deceiving journalists themselves. Take the case of Nordhessen-Journal, a regional German news site that published AI-generated images of the war, apparently after mistaking them for authentic stock photos. Late last year, Crikey reported that AI-generated images of the Israel-Gaza war had been purchased from Adobe Stock and used across the internet without being labelled as fake or AI-generated. Among other photorealistic stock images were fake photos of protests and of children running from bomb blasts in the Gaza Strip.
Information about children’s suffering, particularly from regions affected by conflict or war, tends to evoke a heightened sense of empathy – as was the case with the image of the Syrian toddler whose body washed ashore on a Turkish beach in 2015. This receptiveness to accepting such photos as real may be why debunking AI-generated images of children from war and conflict zones is potentially more challenging than debunking lower-stakes fake political images.
But the important question here is: where will this worsening inability to distinguish real information from fiction take news audiences? Will they end up drowning in an ocean of digitally altered truth, or will we come to believe that ‘all visual information is disinformation’, swaying public discussion away from the real issues?
As AI tools continue to improve, robust regulatory systems and dedicated efforts to increase the general public’s media literacy become a pressing and indisputable need. To that end, it is promising that Meta has decided to label AI-generated images on Facebook and Instagram as part of a broader tech industry initiative to sort out what is real from what is not.

Ayesha Jehangir, CMT Postdoctoral Fellow