"Fractals of Possibility" is an experimental exploration of the intersection between artificial intelligence, cinematic storytelling, and human creativity. Rooted in the overarching theme "KI, Kamera, Kopfkino, der menschliche Faktor im Zeitalter der KI-Filmproduktion" ("AI, Camera, Mental Cinema: the Human Factor in the Age of AI Film Production"), this project delves into how AI reshapes our understanding of film production, narrative creation, and the role of the human imagination. By blending machine-generated visuals, scripts, and concepts with human vision, the project redefines what it means to create in a world where algorithms and creativity coexist.
From the script to the screen, the process interrogates the evolving dynamics between human intuition and computational precision, asking how storytelling, often considered the most human of arts, transforms in the age of AI.
A short note: since this project explored AI tools throughout, every text was, of course, also created with the help of AI.
In the initial Experimentation Phase of Fractals of Possibility, our focus was on exploring the capabilities and limitations of AI tools in the context of cinematic storytelling. This phase was dedicated to testing and intersecting various AI models, including Midjourney, Runway, and Krea AI. By combining these tools, we aimed to understand how each model interpreted and generated elements like mood, style, and narrative, pushing the boundaries of visual storytelling.
A key part of this phase was also using After Effects to prepare foundational references, which would later be stylized and transformed by AI. This approach allowed us to craft initial visuals that could serve as a bridge between human-guided design and AI-driven stylization.
The main objectives were:
1. Crossing AI Tool Capabilities: Testing Midjourney, Runway, and Krea AI together to explore their unique strengths and how they complemented each other.
2. Blending AI with Human Techniques: Creating references in After Effects that set a foundation, allowing AI to stylize and add layers of depth.
3. Identifying Key Opportunities and Limits: Observing where each tool excelled or fell short, helping guide our creative direction for the next phases.
This stage established a framework for human-AI collaboration, using AI's creative capabilities to open new pathways for cinematic expression.
The Nikon Skycapture segment of Fractals of Possibility focused on creating a retro-inspired campaign for a fictional camera model, designed to evoke the aesthetic of the 1980s. Using a self-produced reference video, we set out to capture the essence of vintage Nikon advertisements, blending nostalgia with modern AI capabilities. The goal was to reinterpret and stylize an 80s-inspired Nikon layout, using AI-generated camera mockups and stylized video outputs to build an authentic, immersive campaign.
By employing AI tools to generate visuals, mockups, and effects, this step served as a case study in concept visualization. It demonstrated how AI could be used to bring a creative vision to life, turning a basic concept into a fully realized campaign with cohesive style and storytelling.
The key objectives in this phase were:
1. Capturing a Vintage Aesthetic: Recreating the look and feel of 1980s Nikon ads, balancing nostalgia with digital enhancement.
2. AI-Driven Mockups and Videos: Using generated camera models and visuals to create an iconic, consistent aesthetic that feels both retro and innovative.
3. Showcasing Concept Visualization: Illustrating how AI can take a foundational idea and elevate it, transforming raw references into polished, stylized campaign assets.
This stage highlighted the potential of AI for reimagining past aesthetics, proving that AI-generated content can be a powerful tool for concept visualization and creative exploration.
Project Shōsei explores the concept of reimagining and rebranding a lost civilization through the lens of AI-driven design. The project began with a single reference image representing the remnants of a destroyed civilization. This image was then animated using Runway and stylized into two contrasting versions: a dystopian vision of decay and a utopian vision of revival and harmony. Through these dual interpretations, Project Shōsei aimed to demonstrate the vast potential of AI in generating visual diversity and style variations.
The primary objective was to establish a branding identity for this hypothetical civilization, encapsulating its essence and future direction. By leveraging AI for creative exploration, this stage highlighted how machine learning tools can help us envision alternate narratives and aesthetic possibilities for world-building.
The main goals of this phase were:
1. Exploring Visual Variance: Using AI to create dystopian and utopian versions of the same animated scene, showing how style shifts can redefine a civilization’s identity.
2. World-Building through AI: Building a cohesive brand concept for a new civilization, from its mood and visual language to the values implied by each aesthetic.
3. Highlighting AI's Role in Concept Diversity: Demonstrating AI's power to expand creative possibilities, enabling designers to generate multiple stylistic directions from a single reference.
Project Shōsei illustrates how AI can be a powerful ally in world-building and storytelling, offering designers a toolkit to envision entire civilizations with unique identities and narratives.
The "Nike Loki Campaign" focused on creating a fictional Nike promotion for an imagined adaptive material, designed to respond dynamically to environmental changes. The foundation was a photo of a smart material prototype, which initially had no connection to the Nike brand or campaign. This image was animated using **Runway** and then stylized to fit the Nike aesthetic, transforming it into the centerpiece of a futuristic concept campaign.
The goal was to visualize how this adaptive material could function and inspire, presenting it as a revolutionary development for athletic performance. Each generated asset showcased the material's unique capabilities—its ability to shift, adapt, and enhance the user experience. To support this vision, custom **icons and logos** were designed to symbolize the material's core attributes, contributing to a cohesive campaign identity. Ultimately, the campaign was positioned as an **ambassador for the potential Summer Olympics of 2036**, hinting at the future of performance wear and intelligent materials in sports.
The key objectives in this phase were:
1. Visualizing Material Functionality: Animating and stylizing each piece of source material to demonstrate the adaptive qualities and flexibility of the smart fabric.
2. Creating Iconography and Branding: Developing unique icons and logos to represent the core characteristics of the material, such as adaptability, flexibility, and resilience.
3. Positioning for a Futuristic Event: Framing the campaign as a potential sponsor for the 2036 Olympics, emphasizing Nike’s vision of advanced, performance-enhancing materials.
The Nike Loki Campaign highlighted how AI-driven design can breathe life into speculative concepts, creating immersive, branded experiences that feel both aspirational and grounded in futuristic innovation.
In the Fractals of Possibility project, three core workflows were developed to leverage AI tools for generating and stylizing visual content, each tailored to specific types of source material. These workflows enabled us to explore diverse creative directions while maintaining a streamlined production process. The final outputs were organized and showcased within a Figma prototype, designed to function as both a presentation and a website for the project.
Workflow 1: Self-Created Reference Video
Process: A rough, self-created reference video was developed and then refined with compositing in After Effects to enhance its visual foundation. This composited video was subsequently stylized in Runway to match the intended campaign aesthetic.
Purpose: This workflow allowed for greater initial control over visual elements, creating a solid base for AI stylization and ensuring alignment with the desired look and feel.
Workflow 2: Real Photo Reference
Process: A high-quality real photo was used as the starting point. This image was animated in Runway to introduce movement and dynamic effects, then further stylized in Runway to transform the photo into a fully realized, AI-enhanced video.
Purpose: By using a real-world image as a foundation, this approach offered a high level of realism and detail, ideal for creating stylized content that still felt grounded in reality.
Workflow 3: AI-Generated Reference with Midjourney
Process: A conceptual reference was generated in Midjourney to set a creative baseline. This generated image was then animated in Runway to add motion, and finally stylized in Runway to bring cohesion and depth to the visuals.
Purpose: This workflow allowed for a quick, flexible generation of starting visuals, ideal for rapid experimentation and iteration, as well as for exploring imaginative, AI-driven styles.
The final project was structured in a Figma prototype that served as both a presentation platform and a digital exhibit. This prototype brought together each workflow's output, presenting the results in an engaging, interactive format that allowed viewers to experience the conceptual narrative and visual journey of Fractals of Possibility.
In conclusion, "Fractals of Possibility" offered a deep dive into the capabilities and challenges of using AI for video generation. We applied AI tools at every step of the creative process—from storyboarding, moodboarding, animatics, and animation to stylization, music composition, and title animations. While the results were often impressive in both quality and diversity, we encountered significant challenges in terms of time investment and selectivity.
Each video generation required considerable time for both selection and refinement, with only about 1 in 10 outputs meeting our standards for quality and usability. Furthermore, every stylization and generation was highly dependent on the quality of the reference material, emphasizing the importance of a solid foundation for AI-driven processes.
Despite these limitations, the speed and efficiency of rendering, stylization, and other technical tasks were remarkable, showcasing the potential of AI to streamline traditionally time-consuming elements of video production. Moving forward, we see many of the workflows we explored as adaptable, provided that future developments in AI offer greater control and consistency in the creative process.