AI & filmmaking

AI, automation and robots are all now entering the world of creating and editing films and videos.

The first subtle signs of this can be seen on the iPhone through the “Memories” feature in the Photos app.


Apple Memories

Using video content from a person’s camera roll, Memories can automatically create an intriguing video. Once a Memory is selected, the user has the option to make it shorter or longer and to provide “emotional direction,” which affects the tone and editing to reflect something happy, uplifting, epic, or sentimental (Richter).

“It’s easy for AI to come up with something novel just randomly. But it’s very hard to come up with something that is novel and unexpected and useful.” – John Smith, Manager of Multimedia and Vision at IBM Research

More sophisticated AI video editing can be seen on the website Wibbitz, which can automatically create videos, within seconds, from text it is fed. Websites that cover breaking news, like CNN and Mashable, use this service to create videos that augment their written stories for users who would rather watch a story than read the article.

Magisto is another website that automatically creates edited videos from users’ uploaded content. It asks the user to pick an emotional direction, then lets them refine the result by customizing the tempo, transitions, and effects.

AI as a director/editor


Last year, IBM’s supercomputer, Watson, helped a video editor create a trailer for the Hollywood thriller Morgan. IBM researchers started by feeding Watson over 100 horror film trailers so that it could analyze patterns in their sound and visual components and determine what qualities make a trailer dynamic.

Watson then produced a list of ten scenes from the movie, totaling six minutes, that it determined were the best candidates for the trailer. A human editor took those six minutes of footage and assembled them into a coherent story. Creating a movie trailer typically takes between ten and thirty days; with Watson, editing time was reduced to twenty-four hours (Heathman).

Taking things a step further into the filmmaking process, the 2016 film Impossible Things featured a script written by both AI and humans. The AI agent analyzed data to determine which story structure, premise, and plot twists would best match viewer demand. Intriguingly, the AI determined that a bathtub scene and a piano scene were specifically needed for the film to resonate with its target audience.

An entirely different film, Sunspring, debuted around the same time with the unique credential of having its entire script written, without human intervention, by an AI agent called Benjamin (also known as the “automatic script writer”). To accomplish this, Benjamin was fed dozens of sci-fi movie and TV scripts, including Futurama, Star Trek, Stargate SG-1, The Fifth Element, and Ghostbusters (Richter and Newitz). The short film was then created by human actors, filmmakers, and editors who followed the AI script to the letter. The result, not surprisingly, was an awkward, unintelligible film that shows the current limits of AI as a creative writer.
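Benjamin has been described as a recurrent neural network trained on those scripts one character at a time. Purely as an illustration of that approach, and not Benjamin’s actual code, a toy character-level script generator might look like this (the corpus file, model sizes, and sampling settings are all invented):

```python
# Toy character-level language model in the spirit of script generators like
# Benjamin. The corpus path, hyperparameters, and sampling are illustrative only.
import torch
import torch.nn as nn

text = open("scifi_scripts.txt").read()   # hypothetical training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
data = torch.tensor([stoi[c] for c in text])

# Train: predict each next character from the ones before it.
for step in range(1000):
    i = torch.randint(0, len(data) - 129, (1,)).item()
    x, y = data[i : i + 128][None], data[i + 1 : i + 129][None]
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Sample a "script" one character at a time from the model's predictions.
out, state = [stoi[text[0]]], None
for _ in range(500):
    logits, state = model(torch.tensor([[out[-1]]]), state)
    probs = torch.softmax(logits[0, -1], dim=0)
    out.append(torch.multinomial(probs, 1).item())
print("".join(chars[i] for i in out))
```

A model this small strings together plausible-looking letter sequences but has no grasp of plot or meaning, which is a fair intuition for why Sunspring turned out as strange as it did.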

Robots + cinematography


KIRA

Have you heard of KIRA yet? KIRA is a robot arm developed by Motorized Precision that is quickly changing the game for smooth, precise, and very complex cinematic camera moves. Check out their reel here to get a taste of what is possible.

Using advancements in AI-powered tracking and obstacle-avoidance technology, a sci-fi film called In the Robot Skies was shot entirely by autonomous drones. This example may provide a glimpse into a future where solo filmmakers on tight budgets can shoot entire projects without needing a human camera operator (Richter).

 

AI as an assistant

Creative technologist Sam Snider-Held shows in his article How I Taught A Machine To Take My Job how he trained a neural network to anticipate the 3D objects he wanted modeled and placed in his 3D environment. To accomplish this, he spent five hours placing objects in a virtual 3D space, generating thousands of images and labels along the way. From this training data, the AI system learned to identify visual patterns that correlated with his actions.

For example, the AI system identified that whenever Snider-Held saw a cluster of rocks in his 3D scene, he would next place grass beside them. As a result, the system could anticipate this and add grass to the 3D environment as soon as his mouse got close to the rocks. Snider-Held speculates that this idea of anticipational design could find its way into Illustrator or Photoshop via a plug-in, so AI could take care of tedious work or even, given enough information, keep working on files after the designer has gone home for the day. This is an early example of what deep learning is capable of when built into creative production software.
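As a very rough sketch of how such a system might be wired up, and not Snider-Held’s actual implementation, the following trains a small image classifier on viewport screenshots, each labeled with the object the artist placed next (the folder layout, image size, and class names are all hypothetical):

```python
# Minimal "anticipational design" sketch: map a screenshot of the 3D viewport
# to the object the artist is likely to place next. All paths are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# e.g. recordings/grass/0001.png = a frame after which the artist placed grass
dataset = datasets.ImageFolder(
    "recordings/",                     # hypothetical captured training frames
    transform=transforms.Compose([
        transforms.Resize((96, 96)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Small CNN: viewport frame in, one score per placeable object out.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 24 * 24, len(dataset.classes)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for frames, labels in loader:
        loss = nn.functional.cross_entropy(model(frames), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

# At edit time, feed in the current viewport frame and suggest the top class,
# e.g. pre-loading "grass" when the cursor nears the rocks:
# suggestion = dataset.classes[model(current_frame[None]).argmax().item()]
```

The interesting design choice is that the training data is just a recording of the artist’s own behavior, so the assistant learns one person’s habits rather than a general rule.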

In the context of a designer’s busy schedule, or when a project comes in with a quick deadline, this idea of training AI on a designer’s creative actions is compelling. AI could help a designer generate a batch of ideas as a starting point, or it could work on the project while the designer is out of the office so the project is not turned in late. This might be similar in form to “rendering” in 2D and 3D animation, in which the designer drafts the various components of a scene, focusing on the big picture and overall narrative, and lets the computer output a final render in full detail and quality.

AI as a generator of content

An interesting experiment from AI researchers at MIT demonstrated that it is possible for an AI system to generate new video. The researchers fed short video clips into their system, which then generated a full second of additional video for each clip based on predictions of what would come next in the scene. The most impressive example was a clip in which the system generated the next second of waves crashing in the ocean. Admittedly, the results in their current form are crude, resembling what you might see when Photoshop’s Content-Aware Fill gets things wrong. But the technology provides an impressive look at a future in which both still and moving content can be generated automatically, without anyone needing to go out and physically shoot it.
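The researchers’ actual model was a far more sophisticated generative network, but the underlying idea, predicting the next frame from the current one and rolling that prediction forward, can be shown in a toy sketch (random tensors stand in for real video here, and every shape and hyperparameter is invented):

```python
# Toy video prediction: a tiny convolutional net trained to map frame t to
# frame t+1, then rolled forward to synthesize new footage. Far simpler than
# the MIT researchers' model; this only illustrates the frame-prediction idea.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# clips: (batch, time, channels, height, width); a stand-in for real video.
clips = torch.rand(8, 16, 3, 64, 64)

for epoch in range(5):
    for t in range(clips.shape[1] - 1):
        pred = model(clips[:, t])             # predict the next frame
        loss = loss_fn(pred, clips[:, t + 1])
        optimizer.zero_grad(); loss.backward(); optimizer.step()

# Roll the model forward from the last real frame to hallucinate roughly one
# second of new video (24 frames at 24 fps).
frame = clips[:1, -1]
generated = []
with torch.no_grad():
    for _ in range(24):
        frame = model(frame)
        generated.append(frame)
```

Because each predicted frame feeds the next prediction, small errors compound quickly, which is one reason early results like MIT’s look smeary after only a second or so.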

 

Conclusion

All of these examples provide an interesting look at what the future of filmmaking could look like, especially when you consider that AI systems tend to get better the more data they have access to. So my question becomes: when computers handle many of the tasks that creatives are used to doing, what tasks will remain for them to do? What skills should creatives be honing? What is the true value of a creative and maker in a world of smart computers that can also do creative work?

Check out my post here for some thoughts on those questions and more.

Let me know in the comments below which of these emerging technologies you find most fascinating. Looking ahead, how do you think AI is going to change the way you might work in the near future?


Let’s be friends!

If you enjoyed this post please consider staying up to date by signing up for my email newsletter and then follow CreativeFuture on Twitter and Facebook.

Posted by Dirk Dallas

Dirk Dallas holds an M.F.A. in Graphic Design and Visual Experience from Savannah College of Art and Design. In addition to being a designer, he is also a writer, speaker, educator & the founder of CreativeFuture & From Where I Drone. See what he is up to over on Twitter via @dirka.
