THE SPLIT EXPERIENCE
When a person gazes at something or someone, the eye's focus naturally jumps back and forth, scanning and following contours while identifying various information about the subject. These quick eye movements are called saccades. Although our brain interprets these saccades as seamless, biologically we are blind between these saccade points.
This experiment is the next iteration of my previous project, “Extended Sight”. It takes the metaphor of saccade blindness and pairs it with the jump-cut technique films use to display different angles of one scene. The device consists of two parts: a glasses-mounted eye-tracking camera that controls two semicircular acrylic armatures, driven by two servos that rotate on the x and y axes. The main purpose of this structure is to mount a viewing camera so the camera can rotate 180 degrees around an object placed on a central platform. Due to scale constraints, the maximum object size this specific prototype can revolve around is 6in x 8.5in. The display screen of the viewing camera is mounted over the right eye of the glasses. When the user wears the glasses, the left eye controls the viewing camera and the right eye watches that camera's display. Given the technological constraints of the viewing screen, the only way to foveate on the LCD display while the left eye is being tracked is to rotate the screen in the same direction the pupil is moving. This requires an additional two-axis bracket, driven by two featherweight servos, that turns the display toward wherever the eye is pointing.
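As a rough sketch of the control logic described above, the following Python snippet (a simulation, not the actual device firmware; the function name and the 0-180 degree servo range are my own assumptions) maps a normalized pupil position from the eye tracker onto pan and tilt angles for the two servos:

```python
def pupil_to_servo(px, py, servo_min=0.0, servo_max=180.0):
    """Map a normalized pupil position (0.0-1.0 on each axis) to
    pan/tilt servo angles. Out-of-range samples are clamped so a
    noisy tracker can never command an impossible angle."""
    def clamp01(v):
        return max(0.0, min(1.0, v))
    span = servo_max - servo_min
    pan = servo_min + clamp01(px) * span   # x axis drives the pan servo
    tilt = servo_min + clamp01(py) * span  # y axis drives the tilt servo
    return pan, tilt

# A centered gaze points both servos at their 90-degree midpoint.
print(pupil_to_servo(0.5, 0.5))  # (90.0, 90.0)
```

The same mapping would apply to both the armature servos and the two featherweight servos turning the display bracket.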
Through this device, the eyes are no longer used only for navigation and identification; they gain a new exaptative function of control. It takes viewers out of their normal perspective and lets them see things in a whole new way.
Progressively reveal layers of capabilities
If physical objects can provide abilities to control and navigate algorithmic systems, what are ways to hide and reveal features that help isolate their functions and capabilities? This experiment investigates how a complex system can be hidden within a simple form and how it can progressively reveal layers of capabilities through user interaction. When certain parts are initiated, the object physically animates and transforms to display or hide these features. This system provides constraints, as opposed to laying out all the features and controls on a single display.
- The goal is to design a system that is complex but initially presents itself simply, so different functions are revealed as needed.
- Initial interface presented in its raw form
- Progressively revealing layers of capabilities
- Isolate functions and introduce new feature
- When combined with other objects, it changes its form to interact with them.
- Ideally, when every side of this object can open, it can provide a perpetual sense of discovery during use.
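The switch-driven reveal behavior in the list above can be modeled as a tiny state machine. This is an illustrative Python sketch, not code from the prototype; the feature names are hypothetical:

```python
class RevealingObject:
    """Toy model of progressive disclosure: each feature stays hidden
    until its switch (e.g. a hall-effect sensor) has been triggered."""
    def __init__(self, features):
        # feature name -> revealed flag, all hidden at the start
        self.features = {name: False for name in features}

    def trigger(self, name):
        """A sensor fires: permanently reveal the matching feature."""
        if name in self.features:
            self.features[name] = True

    def visible(self):
        """Features currently shown to the user, in insertion order."""
        return [n for n, shown in self.features.items() if shown]

obj = RevealingObject(["volume", "playback", "network"])
obj.trigger("volume")   # a fingertip magnet passes the first sensor
print(obj.visible())    # ['volume']
```

In the physical prototype the reveal would also drive an animated panel rather than just a flag, but the gating logic is the same.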
The left object is in its initial activated state and the right object is in its natural raw state
Another image of the object in its initial activated state
The object of this experiment in its full display mode, after all switches have been initiated
During the making phase, I tested several raw geometric forms and settled on a polyhedron. This is the first prototyping phase, so the quick mockup is made of cardboard.
Placing hall effect sensors inside the prototype
Because this experiment uses magnetic sensors, I had to rig this glove with magnets on all fingertips to get a sense of its interaction. This image shows the glove flipped inside out so the magnets could be hot-glued in, allowing future removal.
This is an initial sketch of how this object might interact with other objects to form a larger entity.
These are some initial sketches of this idea and of how a polyhedron may change form.
Tactile Organic Mutational Interface
This is the next iteration of my original mutation interface, except it isn't driven by an accelerometer or servos. The form of this tactile interface is purely organic, which makes it look blobby so it fits nicely in your hand. It starts out as a neutral shape like a sphere or ellipsoid, and as the user squeezes, pushes, twists, and grips the device, it changes its form to provide new features. These features are yet to be defined. The form also protrudes bumps at various points as indicators that notify the user of new affordances. Since the goal of this device is to operate purely through touch, without needing to be seen, it is well suited for controls or tasks when the user's sight or other senses are occupied. For example, in a driving scenario it may be detrimental to take your eyes off the road, so the user can operate this device without looking at it.
Physical Interface Changing Over Time
Temporal Interaction: This experiment investigates how a physical interface that changes over time can introduce new ways of interaction. A short change may take a couple of hours; a long one as much as a week. I envision its form changing on its own and also through user interaction. The metaphor for this type of object-and-user relationship is very similar to crafting and caring for a Japanese bonsai tree. Through the user's attentive care in crafting this object, it begins to develop personal value for each individual user. Although the interaction is still very much at a surface level, this direction is very interesting to me. What causes the form to change over time is something I still have to work out. It may be a data set from the internet, or even personal encounters and experiences.
- changes through human interactions
- changes over time on its own
- introduce new ways of interaction as the form changes
- slower interface change over time, on the scale of days
- changes based on how it's used, or on data coming from somewhere else
- builds a closer relationship with the user through time and effort invested over a long period
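One possible way to formalize the slow, dual-source change described above is a minimal Python model (purely hypothetical, with made-up rates) in which the form drifts on its own over days and each user interaction nudges it further:

```python
import math

def form_state(days_elapsed, interactions, growth_rate=0.1):
    """Hypothetical model of slow form change. The object 'grows'
    autonomously over days, and each user interaction nudges it
    further. Returns a value in [0, 1] representing how far the
    form has changed from its original shape."""
    drift = 1 - math.exp(-growth_rate * days_elapsed)  # autonomous change
    care = min(interactions * 0.01, 0.5)               # capped user influence
    return min(drift + care, 1.0)

# A week of neglect versus a week of daily care:
print(form_state(7, 0))  # autonomous drift only
print(form_state(7, 7))  # slightly further along
```

The specific curve matters less than the structure: two independent inputs (time and attention) feeding one slowly evolving form, echoing the bonsai metaphor.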
Other Possible Form Iterations
Art Center MDP
Perceptual Exaptations Part 1:
Hyperreality with Physical Mutation Interface
How can we manipulate and exploit human sensorial perception? The adaptations and exaptations of perception, through evolution and rapid technological development, have expanded greatly since humans first learned to apply their senses in a primitive way. If the interaction is intuitive, what new functions and controls can be introduced to existing senses? The main tasks of vision are navigation and identification, but a good example of an exaptation of the human eye is communicating emotion through the dilation of the pupil. Vision is important because it directly informs us about our environment, its objects and spaces, but alternative ways of retrieving this information are also researched and developed for people with various disabilities. My investigation aims to promote new functions of our perception through the ideas of “sensory fusion”, “sensory substitution”, and “sensory hijack”. Many projects dealing with sensory substitution are designed for the blind and people with disabilities; when these new affordances are applied to a fully able person, how can they inform new interactions that embed the notion of discovery and stimulate imagination? The contents of my experiments are inspired by neurology, cognition, and gestalt psychology, apprehended through phenomenological and biological factors. These hybrid ideas are communicated through physical animation: tangible parts driven by motors and servos that move electronic devices, inspired by the structural language and techniques of visual effects and motion graphics.
Motion graphics and visual effects mediate the worlds of fantasy and reality. For centuries, animated graphic representations and information displayed in scripted scenarios have been a tradition of storytelling, enhancing visual and cognitive perception and allowing viewers to suspend their disbelief and immerse themselves in a scripted fictional world. The entertainment world has incorporated this hyperreality everywhere from stage performance and silent film to television and cinema. In an industrial design scheme, the clockworks, automatons, and mechanics that drive these animations require mathematical calculation and the art of craft-making. Visual effects are mediated in many ways, but one aspect of morphology that remains continuously mesmerizing is the transformation and mutation of forms over a predetermined period of time. When one object structurally alters itself into another representation, it instigates curiosity in visual perception and in the phenomenology of understanding. If this mutational way of animating translates into tangible objects, what affordances can it give the user as an interface? While operating such a shape-changing device, the user needs not only sight but also touch and motor coordination. What type of algorithmic system within these mutational devices can promote understanding and discovery while operating as an interactive physical user interface?
Historical / Context
Hyperreality comes from layers of creative imagination: a humanistic desire for a different world, one that varies by experiential interpretation and is creatively expressed in a system of simulated experiences. Reality is mediated to an extent that it can no longer be distinguished from fantasy. Take Disneyland, for example; as Jean Baudrillard puts it, it is “an imaginary effect concealing that reality no more exists outside than inside the bounds of the artificial perimeter” (Baudrillard, 1). Baudrillard defines three orders of simulation. In the first, the real world is artificially represented in different mediums such as books, paintings, and maps. The second order blurs the boundaries between reality and representation. The third order describes the simulation and its surroundings, which he calls hyperreality: it is produced algorithmically, like the computer programming code that constructs virtual reality or augmented experiences. Baudrillard believes that hyperreality will dominate the way we experience and understand the world we live in (Lane, 2).
Visual perception isn't a passive act; rather, it's an active learning experience. Humans learn to see, understanding perceived data through active body and motor movement to make sense of the world around us. It involves the integration of multi-sensorial processing, from our senses and muscles to coordinated neurological brain function. It is a fundamental understanding in animation that elements need to be constantly moving to feel continuous; even in a static frame, the object or text will still drift in slow motion. Therefore, translating these techniques into tactile devices will also require movement, for better understanding and an organic spatial relationship between the object and the person.
Many motion graphics and visual effects techniques originated in theater and stage art. Some example references, such as artificial weight, go as far back as pantomime performances of the 1700s. As entertainment advances with technology, so do the effects that enhance visual perception.
Tactility isn't only about touching with the fingertips. Crafters such as sculptors and pottery artists use every aspect of their hands, from the palms, nails, and multiple fingers to the backs of their hands, and even other parts of their limbs and torso. Sensitive touch and feel, in conjunction with keen observation of form, let them carefully perfect their craft.
Many handmade craft skills from before the Industrial Revolution have been lost or eliminated. These detailed and elaborate handmade skills make objects and architecture more humanistic. Every duplication is different, with unique imprints left by the maker; even with decades of experience, every handmade craft still has its variations, making each item one of a kind. Since the invention of the assembly-line manufacturing process, every mass-produced item looks identical. This lowers the cost of purchasing these items, but each item is cold and lifeless.
As we moved into the digital era, a major dehumanization took hold of our devices and tools. Everything is mass-produced to give a digital affordance, limiting our senses to perceiving information on a flat screen. Most tactility in interface interaction involves only our fingertips. Keyboards and mice have made humans an extension of the machine itself, making us part of the digital system that keeps the world operational.
The next phase of inventions will be a merger between the tactility of craft and digital affordances. Every control on a digital touch screen can be redesigned as tangible objects, without a screen.
Tangible user interfaces provide affordances for digital information while exploiting humans' capability to grasp and manipulate physical objects in the real world (Ishii, 4). An environment full of activated devices, with rich content and interactions embedded in tangible things and inhabitable spaces, will assemble a new symbiotic system: an ongoing relationship between humans and their environment. This new ecology of things provides an evolving system that can be interpreted and influenced by the interactions and decisions of people or other objects (Allen, 5).
My goal through these experiments is to translate the notion of motion graphics and visual effects animation, embedding these forms and movements into physical objects, devices, and environments. These project explorations are initiated through a matrix chart that I have put together. One axis of the grid holds my ingredients: a definition of each visual effects language applied. The other axis includes eye tracking, other sensorial input sensors (breath, heat, proximity, force, bend, touch), gestalt/cognitive psychology, and neurology tests and theories. Crossing two specific directions provides information that allows me to further develop innovative ideas promoting new functions for various human senses. To retain the value of these devices, they will be speculative tools containing the ideas of manipulation, interpretation, discovery, and stimulated imagination.
During my investigation, much contextual information has come from studying the neurology and psychology of human perception. Not only does the human brain interpret sensorial inputs through interactions with its environment, it also constantly creates gestalts from incomplete information in an attempt to make connections and logical sense of abstraction (Ramachandran, 3). This information can derive from foveal vision, the far periphery of vision, touch and feel, or any other sense of the human body. Understanding how our eyes and brain work, along with phenomenological comprehension, has greatly impacted my decisions to create devices that put a twist on human perception.
Physically Animated Motion Graphics Immersive Space is the first experiment in this investigational process. If text and graphic elements have volume and occupy physical space, how can these elements give an experience similar to television commercials and film title sequences? My approach utilizes several physical graphic elements connected to ropes driven by multiple pulleys. The installation is presented in a dark space with illuminated graphic elements. By sitting or standing in a designated position, the viewer experiences a pre-scripted animation as the graphic elements move through the periphery in sequence.
Saccade Controlled Visual Angle exists in two versions. Both are controlled and viewed through eye-tracking glasses that include a hacked PS3 infrared webcam for the left eye and an LCD monitor mounted for the right eye. The LCD monitor sits on a bracket driven by two featherweight servos, allowing two-axis rotation to follow the focus of the pupil. The eye tracking's main task is to control another physical device that holds the viewing camera feeding the LCD screen. The biggest difference between the two versions is how the viewing camera is mounted.
The first iteration of this experiment is called “Extended Sight”: the viewing camera is mounted on a small robotic bracket driven by two standard-size servos, allowing x- and y-axis rotation so the camera can look around the outside world. If human eyesight can be physically moved to another location, what perspectives can the viewer perceive?
The second version takes the metaphor of the jump-cut technique used in film editing to see different angles. When a person gazes at something or someone, the eye's focus naturally jumps back and forth, scanning and following contours while identifying various information about the subject. These quick eye movements are called saccades. Although our brain interprets these saccades as seamless, biologically we are blind between these saccade points. The glasses-mounted eye-tracking camera controls two semicircular acrylic armatures, driven by two servos that rotate on the x and y axes. The main purpose of this structure is to allow the viewing camera to rotate 180 degrees around an object placed on a central platform, viewing the object from various angles. Due to scale constraints, the maximum object size this specific prototype can revolve around is 6in x 8.5in.
Super Hero Gaze Telekinesis takes on the idea, familiar from visual effects in film, of a superhero with the power to control and manipulate things with their eyes. The controlling interface of this experiment has similarities with the Saccade Controlled Visual Angle project. It also utilizes eye-tracking glasses, here to control a flashlight driven by two-axis servo motors. As the user looks in different directions, the flashlight points wherever the viewer is looking. The purpose of this setup is to open a physical box by looking at it. The box is constructed with a light sensor at its top center. As light triggers the photocell, a servo underneath the box pulls on four strings attached to its four sides, folding the flaps downward in a synchronized motion. This experiment investigates using the gaze of the human eye to trigger physical transformation of objects in the real world. What if everything in the world could be activated and controlled just by looking at it?
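The photocell trigger described above can be sketched as simple threshold logic. This is a hedged Python simulation, not the device's actual firmware; the threshold values, and the idea of hysteresis to keep the flaps from fluttering when the flashlight spot grazes the sensor, are my own additions (the project as built only describes the opening motion):

```python
class GazeBox:
    """Box trigger with hysteresis: the box opens only when the
    photocell reading (0-1023, as from an Arduino analog pin) rises
    above one threshold, and would only close again below a lower
    one, so borderline light levels cannot make the flaps flutter."""
    def __init__(self, open_above=700, close_below=400):
        self.open_above = open_above
        self.close_below = close_below
        self.is_open = False

    def update(self, light_reading):
        if not self.is_open and light_reading > self.open_above:
            self.is_open = True    # servo pulls the four strings
        elif self.is_open and light_reading < self.close_below:
            self.is_open = False   # strings released (speculative)
        return self.is_open
```

A reading of 850 (flashlight on the sensor) opens the box; a later reading of 500 sits inside the hysteresis band, so the box stays open rather than twitching.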
Physical Mutation Interface is an interactive morphing interface inspired by the ideas of organic mutation, morphing, and cognitive visual angle. It is a speculative experiment suggestive of an interactive physical interface that could be implemented in consumer electronics. What if the form, function, and interface could be changed by a simple gesture of wrist rotation? This stand-alone device is driven by a 3-axis accelerometer, read by an Arduino board, which triggers three servo motors, one for each axis of rotation. The servo arms are attached to strong skeletal armatures that rotate in their own independent directions without getting in each other's way. The device is then wrapped in cloth to hide the mechanics within, producing a morphing look and feel from one shape to another and creating multiple, interchangeable surfaces. This interface provides new modes of usage as the panel changes, constraining functions to each display.
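A minimal sketch of the accelerometer-to-servo mapping this device implies, assuming readings in m/s^2 and standard 0-180 degree hobby servos (the normalization scheme is my assumption, not the project's actual firmware):

```python
def accel_to_servos(ax, ay, az, g=9.81):
    """Map raw 3-axis accelerometer readings (m/s^2) to three servo
    angles, one per axis. Each reading is normalized against one g,
    clamped to [-1, 1], and scaled onto the 0-180 degree range."""
    def axis_angle(a):
        n = max(-1.0, min(1.0, a / g))  # clamp to plus/minus one g
        return (n + 1.0) * 90.0         # -g -> 0, rest -> 90, +g -> 180
    return tuple(axis_angle(a) for a in (ax, ay, az))

# Device lying flat: gravity on z only, so the third servo is driven
# to one extreme while the other two rest at their midpoints.
print(accel_to_servos(0.0, 0.0, 9.81))  # (90.0, 90.0, 180.0)
```

Rotating the wrist redistributes gravity across the three axes, which is what lets a single gesture reshape all three armatures at once.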
The experiments I have created so far investigate two major directions. One is a perceptual exaptation of the eye, allowing control and manipulation of the user's own perception. The other is creating mutating physical interfaces, allowing the user to manipulate and influence the form and outcomes of the device. Both have technological limitations, and the interactions are still quite simple and surface-level.
Because the eye-tracking device relies on a motion tracker, the location of the iris isn't always accurate: the tracker requires movement to detect the position of the pupil. As a result, the tracker only works a little more than half the time while the user is trying to control the device. To further develop the possibilities of eye tracking for manipulating vision, I will look into better eye-tracking software.
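One software-side mitigation for the dropout problem would be a filter that smooths the pupil coordinates and holds the last known position when the tracker loses the pupil. A hedged Python sketch (the alpha value and the None-on-dropout convention are assumptions, not part of the existing tracker):

```python
class PupilFilter:
    """Exponential moving average over pupil coordinates. Smooths the
    jitter of a motion-based tracker and, when the tracker drops out
    (signalled here by a None sample), holds the last estimate so the
    servos are never commanded by a missing reading."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha  # higher = more responsive, noisier
        self.pos = None

    def update(self, sample):
        if sample is None:       # tracker lost the pupil
            return self.pos      # hold the last known position
        if self.pos is None:
            self.pos = sample    # first valid sample seeds the filter
        else:
            x = self.alpha * sample[0] + (1 - self.alpha) * self.pos[0]
            y = self.alpha * sample[1] + (1 - self.alpha) * self.pos[1]
            self.pos = (x, y)
        return self.pos

f = PupilFilter()
f.update((100, 100))
print(f.update(None))  # dropout: still reports (100, 100)
```

This does not fix the underlying tracker, but it would turn intermittent detections into a continuous control signal for the servos.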
The physical mutation interface is still at a very rough initial stage. The interactions are very straightforward, allowing only one or two manipulations of its form, and the interactions and affordances on the human receptive side are not yet complex enough to create productive and varied personal interpretations. The importance of the physical mutation interface is not the result or outcome but the process of the interactions.
Plans for Spring Term
Over the Christmas break and next semester, I plan to further investigate the idea of mutating/transforming physical objects as interfaces. How can these operate in a more in-depth manner? Giving complexity to the interactions and their algorithms promotes productive interaction that provides varied results among different users and creates personalized interpretations. I will investigate not only the interactions of these mutating interfaces between object and human, but also between object and object. How will an ecology of mutational objects communicate with and/or influence each other?
1. Baudrillard, Jean, Simulacra and Simulation (The Body, In Theory: Histories of Cultural Materialism), University of Michigan Press, February 15, 1995.
2. Lane, Richard J., Jean Baudrillard, Routledge; 2nd edition, January 16, 2009.
3. Ramachandran, V. S., and Blakeslee, Sandra, Phantoms in the Brain, Harper Perennial, 1999.
4. Ishii, Hiroshi, Tangible Bits: Beyond Pixels, Tangible Media Group at MIT Media Laboratory, February 18-20 2008.
5. Allen, Philip V., The New Ecology of Things, Media Design Program at Art Center College of Design; Limited edition, April 16, 2007.
How can we manipulate and exploit human sensorial perception? The adaptations and exaptations of perception, through evolution and rapid technological development, have expanded greatly since humans first learned to apply their senses in a primitive way. If the interaction is intuitive, what new functions and controls can be introduced to existing senses? The main tasks of vision are navigation and identification, but a good example of an exaptation of the human eye is communicating emotion through the dilation of the pupil. Vision is important because it directly informs us about our environment, its objects and spaces, but alternative ways of retrieving this information are also researched and developed for people with various disabilities. My thesis explorations investigate and promote new functions of our perception through the ideas of “sensory fusion”, “sensory substitution”, and “sensory hijack”. Many projects dealing with sensory substitution are designed for the blind and people with disabilities; when these new affordances are applied to a fully able person, how can they inform new interactions that embed the notion of discovery and stimulate imagination? The contents of my experiments are inspired by neurology, cognition, and gestalt psychology, apprehended through phenomenological and biological factors. These hybrid ideas are communicated through physical animation: tangible parts driven by motors and servos that move electronic devices, inspired by the structural language and techniques of visual effects and motion graphics. My main focus is on exploiting visual perception, whether the understanding of seeing or the physical act of looking, but I am also interested in exploring other sensorial perceptions such as touch, breath, and sound. Visual perception isn't a passive act; rather, it's an active learning experience.
Humans learn to see, understanding perceived data through active body movement to make sense of the world around us. My project explorations are initiated through a matrix chart that I have put together. One axis of the grid holds my ingredients: a definition of each visual effects language applied. The other axis includes eye tracking, other sensorial input sensors (breath, heat, proximity, force, bend, touch), gestalt/cognitive psychology, and neurology tests and theories. My investigations will provide information that allows me to further develop innovative ideas promoting new functions for existing senses. To retain the value of these devices, they will be speculative tools containing the ideas of manipulation, interpretation, discovery, and stimulated imagination.
My obsession began with studying motion graphics in my undergraduate program at Art Center, followed by a handful of years working in the industry before returning to grad school. Through this experience, I developed an interest in the way visual perception works. With an interest in gestalt and cognitive psychology, I began to study the neurology of visual perception as well as disorders such as synesthesia and various forms of blindness (color, face, object, motion). My hobbies growing up, however, revolved around auto mechanics and craft work using materials such as metal, wood, and plastic. With this hybrid of my professional background in motion graphics and my hobby in mechanics, my interests have merged into building tangible interfaces, apparatuses, and installations inspired by the techniques and language of visual effects.
Physical Interactive Form Follows Function
Keywords: Organic Mutation, Morphing, Interactive Physical Interface, Cognitive Visual Angle, Form Follows Function Physically
Question: What if the form, function and interface can be changed by a simple gesture of wrist rotation?
Summary: Mutation Interface is an interactive morphing interface inspired by the ideas of organic mutation, morphing, and cognitive visual angle. It is a speculative experiment suggestive of an interactive physical interface that could be implemented in consumer electronics. This stand-alone device is driven by a 3-axis accelerometer, read by an Arduino board, which triggers three servo motors, one for each axis of rotation. The servo arms are attached to strong skeletal armatures that rotate in all directions without getting in each other's way. The device is then wrapped in cloth to hide the mechanics within, producing a morphing look and feel from one shape to another and creating multiple, interchangeable surfaces.
This video demonstrates the way this device moves with the 3-axis accelerometer
Beginning to prototype and make it work
Cleaned up the wires
Extended Sight + PAMGIS
(Physical Animated Motion Graphics Immersive Space)
Keywords: Immersive Physical Animation Data Visualizer, Saccade, Jump cut editing, viewing from different angles, Eye tracking camera control.
Question: What if the human eye sight can be relocated to another part of the body or another part of the world?
There are two parts to this project. Extended Sight explores the idea of moving a person's eyesight to another location. This could be another location on the body, as part of a limb or any other body part, or a remote location elsewhere. It provokes the question: if our eyesight is removed from our head and placed somewhere else, can we see from different perspectives? This project can also inform ideas for storytelling. If multiple eye-controlled cameras were placed in different countries to tell different narratives happening at the same time, how could the collected footage and data inform a new way of storytelling?
PAMGIS (Physical Animated Motion Graphics Immersive Space)
By placing this camera inside an enclosed box with physical graphic and informational elements strategically positioned, this immersive physical data visualization installation is designed to recreate the experience of watching a motion graphics animation in physical space. From a first-person point of view, with the eye-tracking-controlled camera placed in this space, the viewer is given the affordance to navigate the text and graphical information freely. The space is designed to reverse-engineer the way animation is produced in a scripted environment. As the viewer pans the camera and brings graphic elements into frame, the animation in the viewfinder is inspired by the way motion graphics feel when animated on screen.
TRAILS: Super Hero Power Telekinesis
Keywords: Synchronization, Transformation, Visual Search Selection, Visuospatial Function
Summary: This experiment explores the ideas of visuospatial function, object transformation, multiplicity synchronization, and visual scanning. It utilizes eye-tracking technology as a control, analyzing eye movement so that the scanning motion triggers the opening of a box. The box is inspired by transforming mechanical parts. This single-function box is then duplicated in mass quantity and organized in a grid format. As a result, the boxes open in synchronized motion as the eye pans across and looks around.
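The synchronized trail could be driven by logic as simple as the following sketch, where each column of boxes opens while the gaze is on it or has just passed it (the grid size and trail length are illustrative, not from the installation):

```python
def box_states(gaze_col, n_boxes=8, trail=3):
    """Sketch of the grid trail: as the gaze sweeps across columns of
    boxes, the box under the gaze and the `trail` boxes just behind it
    stay open, producing the synchronized patterned trail. Returns a
    list of booleans, True where a box is open."""
    return [gaze_col - trail < i <= gaze_col for i in range(n_boxes)]

# Gaze on column 4: columns 2, 3, and 4 are open.
print(box_states(4))
```

Sweeping `gaze_col` from left to right replays the trail effect; varying `trail` changes how long each box lingers open behind the gaze.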
Question Raised: When a physical transformation in mass quantities is beautifully synchronized to create a patterned trail, what kind of new functions and feelings can it inform, other than a mesmerizing effect?
This experiment began with using eye tracking and a stage spotlight as a metaphor to create an eye-controlled car headlight. At this early stage, I was experimenting while creating something practical rather than speculative and provocative. Here's a video demonstrating how the car headlight works to help amplify the clarity of the driver's view.
Super Hero Power Telekinesis
Taking the idea further, I added multiplication to the transformation box idea and the eye-tracking-controlled spotlight. The visual effects metaphor this idea draws on comes from superheroes in comics or movies, who often have a special power that allows them to manipulate an object with their mind, eyes, or another sensory input. Combining that with the visual effects language of multiplicity and synchronization, I put together this After Effects animation to demo how the installation would work as your eyes move across the peripheral view.