NOISEPRISM

Thinkithing v2.0: Interactive Comic Book Data Visualization

November 18, 2021
By Cody Pallo

Lately I’ve been trying to conceptualize useful illusions: interrelated visual components intended to solve practical communication problems in the moment. These are ways to convey ideas visually while in the act of talking, with AI applying related multimedia as you speak. The goal is efficient knowledge transfer with another person or within a group.

Imagine we were all Harpo Marx and could only communicate with an array of props like bike horns and kazoos — except instead of props, picture symbolism that augments the words I speak. Roll all of this into an interactive comic book crossed with a media-based data visualization interface. It’s a little like a highly evolved messaging app.

The comic interface has a narrative: me conveying an idea to you in a story-like format, with you also playing a role in the comic frame. It’s an interface we can both see between us, as 2D frames or in 3D space. As we talk, AI organizes information and displays it for both of us in a compartmentalized, framed way, building a linear timeline that depicts the story of the changes in topic over the course of our conversation in real time. It’s a storyboard we can edit and swipe through.

In one example, the AI might determine we are discussing a topic involving Harpo Marx and reveal him within a 2D comic frame. One could then interact with this AI version of Harpo and ask him questions about his life and career, which he would answer in an animated way.

Imagine this comic as a left-to-right array of frames covering the topics we discuss, with branches above and below each frame for non-linear tangents, or new frames pasted onto the front of the conversation entirely. One could cut, copy, and paste frames, or elements of media inside a frame, and put them in a container somewhere outside the comic, or place them next to you or another object in the immediate world around you.
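To make the structure concrete, here is a minimal sketch of that frame array in Python. The names (`Frame`, `Storyboard`) and fields are hypothetical — this is just one way to model a left-to-right timeline with branches above and below each frame, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One comic panel tied to a conversation topic."""
    topic: str
    media: list[str] = field(default_factory=list)          # clips, images, symbols
    branches_above: list["Frame"] = field(default_factory=list)  # tangents above
    branches_below: list["Frame"] = field(default_factory=list)  # tangents below

@dataclass
class Storyboard:
    """The left-to-right timeline of frames we swipe through."""
    frames: list[Frame] = field(default_factory=list)

    def add_topic(self, topic: str) -> Frame:
        # Append a new frame to the main timeline as the topic changes.
        frame = Frame(topic)
        self.frames.append(frame)
        return frame

    def branch(self, parent: Frame, topic: str, above: bool = True) -> Frame:
        # Hang a non-linear tangent above or below an existing frame.
        frame = Frame(topic)
        (parent.branches_above if above else parent.branches_below).append(frame)
        return frame
```

Cut, copy, and paste would then just be list operations on `frames` and on each frame's `media`.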

Continuing the example, say I cut the Harpo Marx character from the frame where he was introduced and put him next to a cutout of the bike horn I spoke of afterward. The AI could mesh the two, find a movie clip where Harpo used a bike horn to communicate, and show it as a video in a 2D frame.

Say the AI could recognize words like “bike horn” and “kazoo.” Instead of seeing a voice bubble listing these two nouns as words, you could opt to have all the nouns, or illustrations of verbs, sorted automagically into two unique containers over the course of the conversation. You could then pull an object out and attach it to another object to convey an interrelated point: put a noun next to a verb illustration, say, or honk the horn yourself interactively, for fun or to demonstrate how that brand of horn sounds.
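The noun and verb containers could be sketched like this. A real system would run speech-to-text plus a part-of-speech tagger (e.g. something like NLTK or spaCy); here a tiny hand-made lookup table stands in for the tagger, so everything below is an illustrative assumption rather than a working pipeline.

```python
# Toy part-of-speech lookup standing in for a real tagger.
POS_LOOKUP = {
    "horn": "noun", "kazoo": "noun", "bike": "noun",
    "honk": "verb", "talk": "verb", "cut": "verb",
}

def sort_into_containers(transcript: str) -> dict[str, list[str]]:
    """Drop each recognized word of the transcript into a noun or verb container."""
    containers: dict[str, list[str]] = {"noun": [], "verb": []}
    for raw in transcript.lower().split():
        word = raw.strip(".,!?")          # strip trailing punctuation
        tag = POS_LOOKUP.get(word)        # unknown words are simply skipped
        if tag and word not in containers[tag]:
            containers[tag].append(word)
    return containers
```

Each container would then map naturally onto a draggable bin next to the comic, with every word doubling as a handle for its illustration.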

All these concepts add new components to the conversation, which let us drill down into a topic, isolate where knowledge is lacking, and illustrate a better way to understand it. This should hopefully help us bond, and make us all a little smarter in the process.