Bob Ross Lives
[[File:HeerkoGAN.jpg|thumb]] | |||
| Bob Ross Lives | |
|---|---|
| Name | Bob Ross Lives |
| Location | NDSM |
| Date | 2019/07/23 |
| Time | 9:30-16:30 |
| PeopleOrganisations | Lenka Hamosova, Pavol Rusnak |
| Type | HDSA2019 |
| Web | Yes |
| Print | No |
In the context of the latest developments in deep learning, and specifically in Generative Adversarial Networks (GANs), we read a lot about the negative side of these tools being widely accessible. The media focus on harmful deep fakes and their consequences for politics or privacy. The creative potential of AI's visual generative power is left behind.

Let's embrace the positive side of neural networks' ability to generate synthetic imagery and explore the potential of these tools with a diverse group of curious makers. Before large companies take over and colonize this creative space, or it gets distilled into simple entertaining Snapchat filters, we want to invite hackers and designers to get their hands dirty and experiment with the deep learning models that would have been Bob Ross' wet dream!

The future brings a massive challenge: our senses must quickly adapt to the new visual reality, and our work methodologies must adapt with them. The formerly impossible is now possible - artificial intelligence can generate photorealistic sceneries, objects, animals and humans that are not part of this physical world. Despite the worrying possibility that this leads to a state of confusion in communication (distinguishing between what's real and what's fake), this technological progress also means unforeseeable advancements in the work of visual makers. It's a new territory that deserves exploration right now, while it's still evolving. Especially for creative professionals, this evolution means necessarily reimagining their practice, discovering new methodologies and appropriating unexpected new tools. Video might have killed the radio star - GANs might as well kill a lot of creative visual jobs, but they will definitely liberate many from laborious visualising in favor of more conceptual and valuable work!
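To make the idea concrete: at its core, a GAN's generator is a network that turns a random latent vector into an image. The sketch below is a tiny, untrained PyTorch generator and is purely illustrative - the name TinyGenerator, the layer sizes and the 64x64 output are arbitrary choices for this example, not the models used in the workshop, and an untrained network produces noise rather than paintings.

```python
# Minimal, untrained DCGAN-style generator: it only illustrates what "generating"
# an image means (sampling a random latent vector and mapping it to pixels).
# All sizes are illustrative placeholders; this is not a workshop model.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Upsample a 1x1 latent "pixel" step by step to a 64x64 RGB image.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),            # -> 64x64 RGB in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

generator = TinyGenerator()
z = torch.randn(1, 128, 1, 1)   # random latent vector: the "seed" of an image
image = generator(z)            # synthetic image tensor, shape (1, 3, 64, 64)
print(image.shape)
```

In the workshop itself you work with pretrained models inside RunwayML, so no training code is needed; the point of the sketch is only that a generated image is the output of running a sampled latent vector through such a network.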
Technical requirements
- your own laptop (Windows, macOS or Linux are fine)
- install RunwayML from https://runwayml.com/ and register your account via the app (see the sketch after this list for pulling its output into GIMP)
- install GIMP from https://www.gimp.org/
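For participants who prefer to script against a model instead of clicking through the app: while a model is running, RunwayML exposes it over the local network and shows the exact address, route and input/output fields in its Network panel. The sketch below is only a rough outline of that workflow, assuming Python with the requests library; MODEL_URL, the "z" input and the "image" output field are placeholders that must be replaced with the values your specific model shows in the app.

```python
# Hedged sketch: fetch one generated image from a locally running RunwayML model
# and save it as a PNG that can be opened in GIMP. The URL, payload schema and
# response field names below are placeholders - copy the real ones from the
# Network panel of the model you are running.
import base64
import requests

MODEL_URL = "http://localhost:8000/query"   # placeholder: copy from RunwayML's Network panel

# Placeholder input: many generative models take a latent vector or a prompt;
# the exact schema depends on the model.
payload = {"z": [0.0] * 512}

response = requests.post(MODEL_URL, json=payload, timeout=60)
response.raise_for_status()
result = response.json()

# Image outputs are commonly returned as a base64 data URI; strip any
# "data:image/png;base64," prefix before decoding.
image_field = result["image"]               # placeholder field name
b64_data = image_field.split(",", 1)[-1]
with open("generated.png", "wb") as f:
    f.write(base64.b64decode(b64_data))

print("Saved generated.png - open it in GIMP for further editing.")
```

Saving to a plain PNG keeps the pipeline simple: RunwayML generates, the script writes the file, and GIMP is used for any manual compositing or retouching on top.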