When Reality Collapses by Leith Benkhedda

{{Article
|MainNavigation=No
}}
Whilst providing us with new tools for image production, digital technologies have also transformed our understanding of reality, changing our relationship to the fabrication and circulation of images channelled by social networks, online video platforms and mainstream media. These ideas were further discussed by Erika Balsom—a critic based in London, working on cinema, art, and their intersection—during her keynote presentation ''Rehabilitating observation—Lens-Based Capture and the ‘Collapse’ of Reality'' for the 2018 edition of the Sonic Acts festival in Amsterdam. In this presentation Balsom also emphasised the importance of documentary practices during a time of planetary crisis and the emerging post-truth era, in which holding on to reality seems to be an emergency.
 
“For Hollywood, it is special effects. For covert operators in the US Military and intelligence agencies, it is a weapon of the future. Once you can take any kind of information and reduce it into ones and zeros, you can do some pretty interesting things”.[1]


[[File:PastedGraphic.png]]
Edwin Catmull, President of Pixar


In December 1972, after $25.4 billion had been invested in the Apollo program, US citizens with a CRT TV could watch Eugene Cernan walking on the moon—the last man to do so—before his return to Earth after a 12-day mission. The feat reinforced, once again, the idea that the US enjoyed “technological superiority” over the USSR. These images have been subject to suspicion and conspiracy theories for decades, suggesting that they were shot in a studio. During the same year ''A Computer Animated Hand'', a one-minute film produced by the computer scientist Edwin Catmull (now president of Pixar and Walt Disney Animation Studios) and Fred Parke, was made for their graduation at the University of Utah. This film, one of the earliest experiments in computer-generated imagery, was built from 350 digitised polygons that generated a simplified representation of the surface of Catmull’s hand. The hand was then animated in a program Catmull wrote, which would later set the standards for today’s 3D software. An echo of these two events reverberated a few months ago when Nvidia—a technology company based in Santa Clara, US, and a leader in graphics card manufacturing since its invention of the GPU in 1999—published a video to promote the new architecture of its products, which allows real-time ray-traced rendering. During the launch of the RTX series, Nvidia’s founder said: “using this type of rendering technology, we can simulate light physics and things are going to look the way things should look”.[2] In this video, the company created an entirely computer-generated reenactment of the moon landing in the Unreal game engine, and used it to argue for the authenticity of the Apollo mission images from 1972. The project inadvertently underlines the ambiguous relationship we now have to the images presented to us: contemporary image production technologies are now used to dismiss the doubts they created, in a context where the public can apparently no longer tell the difference between the real and the constructed image, a fact accidentally proven by Kim Laughton and David O’Reilly and their joint project.
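Ray tracing, the technique the RTX architecture accelerates in hardware, essentially simulates how rays of light hit and illuminate surfaces. The following is only a rough sketch of what one ray-traced pixel boils down to, an intersection test plus a shading calculation over a made-up one-sphere scene; it is not Nvidia’s, Unreal’s or any production renderer’s code.

<syntaxhighlight lang="python">
# Minimal sketch of one ray-traced pixel: cast a ray, find the nearest surface
# it hits, and shade that point from the light direction. Illustration only;
# the scene values below are invented for the example.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None on a miss."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)          # 'direction' assumed normalised
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Simple Lambertian shading (the 'light physics') at the hit point."""
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0                            # ray missed: background
    hit = origin + t * direction
    normal = (hit - center) / radius
    return max(np.dot(normal, -light_dir), 0.0)

# One pixel's worth of work: a ray from the camera towards a sphere,
# lit by a light shining along -z.
camera = np.array([0.0, 0.0, 0.0])
ray = np.array([0.0, 0.0, -1.0])
print(shade(camera, ray,
            center=np.array([0.0, 0.0, -3.0]),
            radius=1.0,
            light_dir=np.array([0.0, 0.0, -1.0])))
</syntaxhighlight>

A real-time renderer repeats this kind of calculation for millions of pixels per frame, with far more elaborate materials and many more light bounces.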


[[File:NVIDIA.png]]
NVIDIA re-enactment of the moon landing in Unreal Engine, 2018


Kim Laughton and David O’Reilly are two CG artists based in Shanghai and Los Angeles respectively. Through the blogging platform Tumblr, they collaborated on a project titled ''#HyperrealCG'', a collection of banal images presented as still lifes, landscapes, portraits and so on, whose captions provide information about the authors and the software used to produce them. Its stated purpose was to showcase “the world’s most impressive and technical hyper-real 3D art”.[3] The images published on the website are in fact photographs, either collected while browsing the web or taken by Laughton and O’Reilly themselves. Their enterprise was meant as a comment on a fellow artist who was driven by the desire to achieve photorealistic work and considered photorealism a sign of artistic quality. The Huffington Post published a clickbait article titled ‘You Won’t Believe These Images Aren’t Photographs’, later apologising and re-titling it ‘You Won’t Believe These Images Aren’t Photographs, Because They Are Photographs’. The hashtag #HyperrealCG spread throughout the web. Eventually the two artists revealed their intentions on their Twitter account: they were making a good joke, but “not trying to fool anyone”.[4]


[[File:HYPERREALCG.png]]
From #HYPERREALCG


We are experiencing an unprecedented crisis in the trust citizens place in professional news media. This distrust has grown out of the ‘alternative’ facts, post-truth politics and fake news that have become part of our daily news cycle since Donald Trump’s presidential campaign and election in 2016. Computer-generated imagery and artificial intelligence technologies are getting better at creating convincing illusions, while their use by hoaxers and propagandists to generate fake or doctored content reinforces the doubts we have about the authenticity of visual media.


In 2017 Eliot Higgins—the founder of Bellingcat, an organisation specialising in open-source investigations—discovered that still images taken from the promotional video of the video game ''AC-130 Gunship Simulator: Special Ops Squadron'' had been shared by the Russian Ministry of Defence on Twitter. The images were presented as evidence of the United States’ collaboration with the Islamic State of Iraq and the Levant (ISIL), accompanied by the statement: “The US are actually covering the ISIS combat units to recover their combat capabilities, redeploy, and use them to promote the American interests in the Middle East”.[5] A screenshot from the game shows a low-resolution, black-and-white bird’s-eye view, overlaid with a crosshair in the middle of the screen where the player opens fire on their targets. The interface of the game crudely replicates the aesthetics of military drone monitors, again showing how far real and constructed images now overlap, and how poorly we tell them apart.


[[File:AC-130.png]]
AC-130 Gunship Simulator: Special Ops Squadron


In an age of untrustworthy images, when fake videos are all over the internet, computer-generated imagery seems a perfect medium to weaponise. You can ask Nicolas Cage. Artificial intelligence allows us to put words in anybody’s mouth. Far from Josh Kline’s ‘Hope and Change’ face substitutions, made with machine learning a few years earlier, Supasorn Suwajanakorn (a researcher from the University of Washington’s Graphics and Imaging Laboratory) describes how a single image is enough to simulate a three-dimensional facial model, as detailed in his paper ''Synthesizing Obama: Learning Lip Sync from Audio'' (2017), although more footage is necessary to reproduce the various imperfections and wrinkles that each facial expression generates.


To complete the three-dimensional model, “a simple averaging method” sharpens the colours and textures, producing a fully controllable facial model that can then be driven by any video given as input. Stanford University had already developed another program with the same purpose, but there the results were rendered in real time using only a webcam as input. Similar open-source algorithms are now available under the MIT License and are free to use on platforms like GitHub. Generating images through machine learning is still not an easy task, however, and doesn’t happen at the click of a button; a basic understanding of programming is required. Unfortunately, the accuracy of the results produced by these technologies is only part of the problem, along with the costs and technicalities. Their very existence creates a new source of doubt in a time we could already qualify as uncertain, reinforcing a climate of ambient and collective paranoia fed by conspiracy theories of all kinds, which may not remain confined to the darkest corners of the web.
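None of the systems mentioned above is a one-click tool, but one common first step in such face-reenactment pipelines is easy to sketch: tracking facial landmarks in a “driving” video, so that mouth, eye and jaw positions can steer a target facial model. The snippet below illustrates that step only, using OpenCV and dlib with dlib’s publicly distributed 68-landmark predictor; the video path is a placeholder, and this is not the code of the papers cited.

<syntaxhighlight lang="python">
# Sketch of landmark tracking in a driving video. Illustration only: it uses
# OpenCV + dlib and dlib's 68-point shape predictor file, which must be
# downloaded separately; "driving_video.mp4" is a placeholder path.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_per_frame(video_path):
    """Yield a (68, 2) array of landmark coordinates for each frame with a face."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 1)
        if len(faces) > 0:
            shape = predictor(gray, faces[0])
            yield np.array([(p.x, p.y) for p in shape.parts()])
    capture.release()

# Each yielded array describes one frame's facial pose; a reenactment model
# maps this sequence of poses onto the target's controllable facial model.
for points in landmarks_per_frame("driving_video.mp4"):
    print(points.shape)  # (68, 2)
</syntaxhighlight>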


[[File:Supasorn Suwajanakorn.png]]
Supasorn Suwajanakorn, TED talk, 2018


In 2016 Julian Assange, whistleblower and founder of WikiLeaks, was interviewed by the journalist and documentary filmmaker John Pilger for the television network RT (formerly Russia Today), to discuss the US elections and Hillary Clinton’s campaign. The video was published on YouTube, and in it we can see irregularities and glitches at certain points in the interview. In another video, titled ''Top 5 Reasons Julian Assange Interview with John Pilger is FAKE #ProofOfLife #WhereisAssange'', the author draws a parallel between the software mentioned earlier and the visual anomalies (glitches) visible in the interview. After 6 minutes and 54 seconds of conspiracy theories, a badly recorded voice with a strong British accent merges with an anxiogenic soundscape and tries to convince me that Assange’s interview is fake. I almost bought it. The constant growth of our ability to doctor images or generate photorealistic and physically accurate 3D renderings is now closely linked with our progressive inability to believe any image at all.


[[File:5reasonsJulianAssange.png]]
5 Reasons Julian Assange Interview with John Pilger is FAKE? #WhereisAssange #ProofOfLife


“Democracy depends upon a certain idea of truth: not the babel of our impulses, but an independent reality visible to all citizens. This must be a goal; it can never fully be achieved. Authoritarianism arises when this goal is openly abandoned, and people conflate the truth with what they want to hear. Then begins a politics of spectacle, where the best liars with the biggest megaphones win”.[6]


With the support of the AI Foundation, Supasorn Suwajanakorn is currently developing a browser extension called ''Reality Defender'', described as “the first of the Guardian AI technologies that we are building on our responsibility platform. Guardian AI is built around the concept that everyone should have their own personal AI agents working with them through human-AI collaboration, initially for protection against the current risks of AI, and ultimately building value for individuals as the nature of society changes as a result of AI”.[7] The software is designed to scan every image or video for artificially generated content, helping users identify the ‘fakes’ they might encounter on the web. The project came to life partly to counter the potential misuse of Suwajanakorn’s own work, creating another cat-and-mouse game in which using algorithms against themselves seems to be the only solution to the problems they create.
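Reality Defender’s internals are not public, so the following is only a sketch of the general shape such a detector could take: a binary image classifier that outputs a probability that an image was artificially generated. The model weights, file paths and class labels below are hypothetical placeholders, not part of the actual product.

<syntaxhighlight lang="python">
# Minimal sketch of a "fake image" detector: a binary classifier over pixels.
# NOT Reality Defender's implementation; "fake_detector.pt" is a hypothetical
# checkpoint assumed to have been fine-tuned on real vs. synthetic images.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # classes: [real, generated]
model.load_state_dict(torch.load("fake_detector.pt", map_location="cpu"))
model.eval()

def probability_generated(path):
    """Return the model's estimate that the image was artificially generated."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return torch.softmax(logits, dim=1)[0, 1].item()

print(probability_generated("suspect_frame.png"))
</syntaxhighlight>

A browser extension built around such a model would run this kind of inference on the images and video frames a page loads, flagging those scored as likely synthetic.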


When a woman named Jennifer sat on a beach on the island of Bora Bora over 30 years ago, she couldn’t have imagined that the intimate moment she was sharing with her partner—John Knoll—and his camera would be a turning point for the future of image manipulation. The image taken that day provided the ground for current conversations around image manipulation, making reality suddenly mouldable. John Knoll and his future wife Jennifer were both employees of ILM (Industrial Light & Magic), one of the first, and still one of the largest, companies specialising in special effects, founded in 1975 for the production of the first Star Wars film, directed by George Lucas. Together with his brother Thomas, then a doctoral student in computer vision at the University of Michigan, John Knoll developed one of the first raster graphics editors. Digitised images were uncommon then, and as they needed to provide an image along with the software for their clients to play with, John scanned the only picture he had to hand that day—the picture of Jennifer. The software became what we now know as Photoshop, and was bought by Adobe Systems in 1989. The picture of Jennifer in Bora Bora, titled ‘Jennifer in Paradise’, is an important artefact in image and software history; it had largely disappeared from view until 2010, when Adobe published the video ‘Photoshop: The First Demo’ on its YouTube channel to celebrate the software’s anniversary. Shortly after this release, the Dutch artist Constant Dullaart, whose work and research revolve around the internet and its culture, reconstructed this emblematic picture from screenshots extracted from the Photoshop demo. He later used it as the main material for a project titled ‘Jennifer in Paradise’, an attempt to recreate Photoshop filters by engraving a set of glass sheets. The project brought new attention to an image that many people had forgotten, or never knew about in the first place, allowing it to find a new place on the web.


[[File:Jennifer in Paradise.png]]
Jennifer in Paradise, 1988


We are living in a time of uncertainty, a time when reality is apparently under attack. That is, if it hasn’t already collapsed under the pressure of postmodernist academia, the spectacle of contemporary politics, and the technological development of image production techniques with the rise of computer-generated imagery. This technological development has allowed us to perceive and shape new worlds, and has spilled over from the surface of the screen to inhabit our physical lives. As the filmmaker, artist and writer Hito Steyerl suggests in her essay ''Too Much World: Is the Internet Dead?'', “reality itself is post-produced and scripted”, meaning that “the world can be understood but also altered by its own tools”.[8]
 
 
 
[1] Arkin, W. M. (1999) ‘When Seeing and Hearing Isn’t Believing’, ''The Washington Post'', 1 February. Available at: https://www.washingtonpost.com/wp-srv/national/dotmil/arkin020199.htm (Accessed 19 March 2019).

[2] Caulfield, B. (2018) ‘By the Light of the Moon’, ''Nvidia Blog'', 11 October. Available at: https://blogs.nvidia.com/blog/2018/10/11/turing-recreates-lunar-landing (Accessed 19 March 2019).

[3] #HYPERREALCG. Available at: http://hyperrealcg.tumblr.com/ (Accessed 19 March 2019).

[4] O’Reilly, D. (2015) Twitter, 3 March. Available at: https://twitter.com/davidoreilly (Accessed 19 March 2019).

[5] Walker, S. (2017) ‘Russia’s ‘irrefutable evidence’ of US help for ISIS appears to be video game still’, ''The Guardian'', 14 November. Available at: https://www.theguardian.com/world/2017/nov/14/russia-us-isis-syria-video-game-still (Accessed 19 March 2019).

[6] Snyder, T. (2018) ‘Fascism is back. Blame the Internet’, ''The Washington Post'', 21 May. Available at: https://www.washingtonpost.com/news/posteverything/wp/2018/05/21/fascism-is-back-blame-the-internet/ (Accessed 19 March 2019).

[7] Reality Defender, AI Foundation. Available at: http://www.aifoundation.com/responsibility (Accessed 19 March 2019).

[8] Steyerl, H. (2013) ‘Too Much World: Is the Internet Dead?’, ''e-flux'', November. Available at: https://www.e-flux.com/journal/49/60004/too-much-world-is-the-internet-dead/ (Accessed 19 March 2019).
