The Initial Idea
Originally, The Glimpse was conceived as a radical leap in human-machine relationships. It took the form of a Brain-Computer Interface (BCI) design ecosystem called "DreamTogether™" (the ™ was added as a joke, to make it feel like an already established and widely used platform). The initial idea was Enze's: he wanted to explore what a futuristic design platform for designers would be like. Instead of VR/AR, what would it be like if all you needed to do was close your eyes and think? What would an interface that exists purely "inside one's mind" be?
The story we used to set the tone for the project's initial idea goes as follows:
Dream together (literal "thought sharing") --- and what it would mean for the design process:

John is a member of a freelance designer group, and today he needs to pitch the idea of a brand-new product --- a rocket-powered super-car --- to a group of investors. He drew a few drafts of the form of this super-car to help visualise it better in his head. Then he links his brain to the server of DreamTogether™, closes his eyes, and visualizes the car in his head. The readings of his brainwaves course through the brain-computer link and get interpreted by the DreamTogether™ AI. When he finishes imagining, he opens his eyes to see exactly what he had been thinking, right on the screen --- well, it may not be "exact", since we don't see pictures in our heads as clearly as we see real things. He manually tweaks the parts he thinks could be improved and then contacts the other members of his designer group.

Joel, the engineer of the group, notices some odd parts that are not optimal for production. He downloads John's Dreamage® and links it to his brain. When he closes his eyes, the brain-computer link stimulates his optic nerves so he can see the pictures despite having his eyes closed. He then imagines the changes in his head, while the AI reads his brainwaves and re-generates parts of the Dreamage®. And since Joel imagines the entire 3D model of the car, the AI also generates a model and an animation of the car spinning, so the investors can see it from all angles.

Then it's Jason's turn: he's the one to present, so he needs to think out a presentation to pitch the car to the investors in the old-fashioned way. Since most DreamTogether™ services require a brain-computer link, and require the user to focus on inner thoughts and ignore the senses of the outside world, they are used in designated private rooms rather than in public spaces, where it might be dangerous to users. On top of that, rumors say it is possible to map out a person's thought process through reverse engineering, and designers obviously don't want their literal "intellectual property" to be stolen. Jason thinks out an entire slide deck and a couple more videos to showcase the product.

Sadly, this time no investors were interested in the idea. But that's okay --- John the idea-guy has already come up with something new! He just needs a good night's rest to restore his brain power, and the team will be ready to present a brand-new idea tomorrow evening!
Shifting Directions
The initial idea of "DreamTogether" was interesting enough to attract two more members --- Aizaz and Kai --- to the team, making us the only team consisting of more than one member!
Aizaz and Kai both brought brand-new perspectives and design research to the team. We interviewed designers with more experience in speculative design, as well as fellow design students, to ask what kind of AI-integrated design platform they would like to have in the future.
Then came the day we needed to make our project more concrete and practical: we each had to come up with a more detailed plan for how the project should move forward. Enze kept to his original plan of making a futuristic collaboration platform for designers, and decided to base user interaction on eye movement. Kai's plan was something like a Black Mirror episode: a smart contact lens that not only has all the functions of a current-day smartphone, but can also record your thoughts in real time and let the user enter a mind-palace-like dream realm for deeper study and imagination. Aizaz's idea was to replace the BCI with a holographic interface instead, and to use external signal readings (e.g. eye movement, body temperature, etc.) to better understand the user's thoughts without invasive implants. After discussion, our group ended up going with Aizaz's idea, as it was more feasible to present within the tight timeframe and would appeal to a wider range of people, since it doesn't involve any invasive implants or chips.
Then we narrowed down what exactly we wanted to do with the "holographic design platform" concept: it should be portable and therefore small in size, have few or no buttons (as most controls happen via the holographic interface), and look futuristic. "Maybe it could look like something similar to a CCTV camera," one of us suggested.
When we showed our idea to Professor Klöckner, he suggested we narrow the scope down further --- instead of a designers' platform that can be used to design anything, it could be more specific: perhaps cars. He also suggested we focus more on the "everyone can have their input" aspect instead of making it an "expert's tool". This way, it would sound more appealing to investors in a real-world scenario.
Below are the prototypes and a storyboard we generated using AI.
The Pitch Deck
Now that we were sure what our final design was about --- a holographic car design and co-creation platform where users can make their own input --- we could start making a pitch deck for it.
We decided to start the whole pitch by telling a story and introducing the problem: "Have you ever imagined how customizable our world is? You can customize practically everything: your smartphone case, your watch, your suit… except… your car?" Yes, cars are still customizable to an extent; the real problem is that customization comes with an expensive price tag, and even after paying extra, the options are very limited. But none of that needs to be addressed right at the beginning --- it can be clarified later. The opening served purely as an attention-catcher: we even added a goofy little avatar to show off all the cool things you can customize, alongside a very generic, plain-looking car for contrast.
Then we took things more seriously as we dove into the real problem: what car customization actually costs in time and money today. We showed our audience Tesla's car customization page to illustrate the limitations of most car customization, and provided data on how much longer a simple custom paint job takes and how absurdly expensive more intense customization gets with brands like Rolls-Royce and Porsche.
Then we showed our audience the solution --- The Glimpse. It cuts down the time cost of communication by letting users make their own input directly: they can shape and mold the car like clay if they want to. Of course, the AI assistant will warn them if they make the form too aggressive, rendering the structure unstable or un-aerodynamic. The holographic projector can also provide users with test-drive simulations. As for cutting down the cost of customization, Professor Klöckner provided us with a link to the homepage of Modix, a large-scale 3D printing manufacturer. Cheaper car customization is already possible today; we can only imagine how far it can go in the future.
The Final Video
What should the video be like? A video presentation, or a video pitch deck for a product launch? Neither --- we decided to make it a "commercial advertisement". As Professor Klöckner suggested, we tied our platform to an existing car brand --- BMW --- so that we could utilize existing visual styles and materials while making audiences feel more familiar with our platform. We watched quite a lot of BMW commercials on YouTube to make sure we understood the brand's style.
It took quite a while for us to reach a first video draft we all agreed on --- one with a few comedic elements at the beginning and the end. Sadly, as we were generating the separate segments of the video using AI, we found we couldn't make the AI understand how we wanted the comedic parts generated. We couldn't waste all of our credits on those two scenes, so we sadly had to ditch them. As a result, the final video turned out completely "serious", with far more style and high-end polish than the initial draft. As for the voice-over, we tested free text-to-speech models, and to be honest, none of them worked well: we needed a model that could hit the right speed and flow while emphasizing the words we wanted emphasized. So instead we found an AI voice-changer app and let Enze do the voice-over. Enze would read the lines with the flow we needed, and the voice changer would turn his voice into Sean Bean's.
It's hard to say whether the final outcome was better than the original draft, but all of us were very satisfied with how it turned out in the end. All in all, we're very glad we took this course, as we learned so much during this rather short semester.