Art Center MDP
Perceptual Exaptations Part 1:
Hyperreality with Physical Mutation Interface
How can we manipulate and exploit human sensory perception? The adaptations and exaptations of perception through evolution, together with the rapid pace of technological development, have greatly expanded our senses since humans first learned to apply them in a primitive way. If the interaction is intuitive, what new functions and controls can be introduced to existing senses? The main tasks of vision are to navigate and identify, but a good example of an exaptation of the human eye is communicating emotion through the scaling of the iris. Vision is important because it directly informs us about the objects and spaces in our environment, yet alternative ways to retrieve this information are sometimes researched and developed for people with various disabilities. My investigation aims to promote new functions of our perception through the ideas of “sensory fusion,” “sensory substitution,” and “sensory hijack.” Many projects dealing with sensory substitution are designed for the blind and people with disabilities; when these new affordances are applied to a fully functional person, how can they inform new interactions that embed the notion of discovery and stimulate imagination? The contents of my experiments are inspired by neurology, cognition, and gestalt psychology, apprehended through phenomenological and biological factors. These hybrid ideas are communicated through physical animation: tangible parts driven by motors and servos that move electronic devices, inspired by the structural language and techniques of visual effects and motion graphics.
Motion graphics and visual effects mediate the worlds of fantasy and reality. For centuries, animated graphic representations and information displayed in scripted scenarios have been a tradition of storytelling, enhancing visual and cognitive perception and allowing the viewer to suspend disbelief and immerse themselves in a scripted fictional world. In entertainment, this hyperreality has been incorporated everywhere from stage performance and silent film to television and cinema. In an industrial design scheme, the old clockworks, automatons, and mechanisms that drive these animations require mathematical calculation and the art of craft making. Visual effects are mediated in many ways, but one aspect of morphology that continuously produces a mesmerizing effect is the transformation and mutation of forms over a predetermined period of time. When one object structurally alters itself into another representation, it instigates curiosity in visual perception and in the phenomenology of understanding. If this mutational mode of animation is translated into and represented by tangible objects, what affordances can it give the user as an interface? While operating such a shape-changing device, the user must rely not only on sight, but also on touch and motor coordination. What kind of algorithmic system within these mutational devices can promote understanding and discovery while operating as an interactive physical user interface?
Historical / Context
Hyperreality comes from layers of creative imagination: a humanistic desire for a different world, one that varies by experiential interpretation and is creatively expressed in a system of simulated experiences. Reality is mediated to the extent that it can no longer be distinguished from fantasy. Take Disneyland, for example; as Jean Baudrillard puts it, it is “an imaginary effect concealing that reality no more exists outside than inside the bounds of the artificial perimeter” (Baudrillard, 1). Baudrillard defines three orders of simulation. In the first, the real world is artificially represented in different mediums such as books, paintings, and maps. The second-order simulation blurs the boundaries between reality and representation. The third order describes the simulation together with its surroundings, which he calls hyperreality; it is produced algorithmically, like the programming code that constructs virtual reality or augmented experiences. Baudrillard believes that hyperreality will come to dominate the way we experience and understand the world we live in (Lane, 2).
Visual perception isn’t a passive act; rather, it’s an active learning experience. Humans learn to see and to understand perceived data through active motor movement of the body. It involves the integration of multi-sensory processing, our senses and muscles coordinating with neurological brain function. It is a fundamental understanding in animation that elements need to be constantly moving to feel continuous; even in a static frame, an object or text will still drift in slow motion. Therefore, translating these techniques into tactile devices will also require movement, for a better understanding and a more organic spatial relationship between the object and the person.
Many motion graphics and visual effects techniques originated in theater and stage art. Some, such as artificial weight, go as far back as pantomime performances of the 1700s. As entertainment advances with technology, so do the effects that enhance visual perception.
Tactility isn’t only about touching with the tips of the fingers. Crafters such as sculptors and potters use every aspect of their hands: the palm, the nails, multiple fingers, the backs of their hands, even other parts of their limbs and torso. They combine sensitive touch and feel with a keen observation of form to carefully perfect the craft.
Many handmade craft skills from before the Industrial Revolution have been lost or eliminated. These detailed and elaborate handmade techniques make objects and architecture more humanistic. Every duplication is different, with unique imprints left by the maker; even with decades of experience, every handmade craft still has its variations, which makes each item one of a kind. Since the invention of the assembly-line manufacturing process, every mass-produced item looks identical. This lowers the cost of these items, but each one is cold and lifeless.
After we moved into the digital era, a major dehumanization occurred across our devices and tools. Everything is mass-produced to give a digital affordance, limiting our senses to perceiving information on a flat screen. Most tactility within interactive interfaces revolves around only the tips of our fingers. The keyboard and mouse have made humans extensions of the machine itself, parts of the digital system that keep the world operational.
The next phase of invention will be a merging of the tactility of craft with digital affordances. Every control on a digital touch screen can be redesigned as a tangible object, without a screen.
Tangible user interfaces provide affordances to digital information while exploiting humans’ ability to grasp and manipulate physical objects in the real world (Ishii, 4). An environment full of activated devices, with rich content and interactions embedded in tangible things and inhabitable spaces, will assemble a new symbiotic system: an ongoing relationship between humans and their environment. This new ecology of things provides an evolving system that can be interpreted and influenced by the interactions and decisions of people and of other objects (Allen, 5).
My goal in these experiments is to translate the notions of motion graphics and visual effects animation, embedding these forms and movements into physical objects, devices, and environments. The project explorations begin with a matrix chart I have put together. One side of the grid holds my ingredients: a definition of each visual effects language applied. The other side includes eye tracking, other sensory input sensors (breath, heat, proximity, force, bend, touch), gestalt/cognitive psychology, and neurological tests and theories. The cross-section of two specific directions provides information that allows me to further develop innovative ideas promoting new functions for various human senses. To retain their value, these devices will be speculative tools carrying the ideas of manipulation, interpretation, discovery, and stimulated imagination.
Much of the contextual information in my investigation comes from studying the neurology and psychology of human perception. Not only does the human brain interpret sensory input through interactions with its environment, but it also constantly creates gestalts from incomplete information in an attempt to make connections and logical sense of abstraction (Ramachandran, 3). This information can derive from foveal vision, the far periphery of vision, touch and feel, or any other sense of the human body. Understanding the way our eyes and brain work, along with this phenomenological comprehension, has greatly influenced my decisions in creating devices that put a twist on human perception.
Physically Animated Motion Graphics Immersive Space is the first experiment of this investigation. If text and graphic elements had volume and occupied physical space, how could they give an experience similar to a television commercial or a film title sequence? My approach uses several physical graphic elements connected to ropes driven by multiple pulleys. The installation is presented in a dark space with illuminated graphic elements. Sitting or standing in a designated position, the viewer experiences a pre-scripted animation as the graphic elements move through the periphery in sequence.
Saccade Controlled Visual Angle exists in two versions. Both are controlled and viewed through eye-tracking glasses that include a hacked PS3 infrared webcam for the left eye and an LCD monitor mounted for the right eye. The LCD monitor sits on a bracket driven by two featherweight servos, allowing two-axis rotation to follow the focus of the pupil. The main task of the eye tracking is to control a separate physical device that holds the viewing camera feeding the LCD screen. The biggest difference between the two versions is how the viewing camera is mounted.
The first iteration of this experiment is called “Extended Sight”: the viewing camera is mounted on a small robotic bracket driven by two standard-size servos, allowing x- and y-axis rotations so the camera can look around the outside world. If human eyesight can be physically moved to another location, what perspective can the viewer perceive?
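The thesis does not reproduce the control code itself; as a minimal illustrative sketch (the function name and servo ranges below are my own assumptions, not the project’s firmware), the pupil position reported by the eye-tracking camera can be mapped to the two servo angles like this:

```python
def gaze_to_servo(px, py, pan_range=(30.0, 150.0), tilt_range=(30.0, 150.0)):
    """Map a normalized pupil position (0..1 on each axis) to pan/tilt
    servo angles in degrees, clamping readings that fall off-frame."""
    def lerp(t, lo, hi):
        t = max(0.0, min(1.0, t))  # clamp noisy tracker output
        return lo + t * (hi - lo)
    return lerp(px, *pan_range), lerp(py, *tilt_range)
```

With these assumed ranges, a centered pupil (0.5, 0.5) maps to the neutral 90-degree position on both servos.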
The second version takes the metaphor of jump cuts in film editing to see different angles. When a person gazes at something or someone, the eyes’ focus naturally jumps back and forth, scanning contours and following the subject while identifying information about it. These quick eye movements are called saccades. Although our brain interprets them as seamless, biologically we are blind between the saccade points. The glasses-mounted eye-tracking camera controls two semicircular acrylic armatures driven by two servos that rotate on the x and y axes. The structure allows the viewing camera to rotate 180 degrees around an object placed on a central platform, viewing it from various angles. Due to scale constraints, the maximum size of object this prototype can revolve around is 6 in x 8.5 in.
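Following the jump-cut metaphor, the viewing camera should snap between discrete angles rather than glide. One way to sketch that logic (the threshold value and names are hypothetical, not taken from the project) is to treat any gaze displacement above a threshold as a saccade and update the camera only then:

```python
def jump_cut_angles(gaze_positions, saccade_threshold=0.15):
    """Quantize a stream of normalized gaze positions into discrete
    camera angles: the camera 'cuts' only when the gaze jumps farther
    than saccade_threshold, mirroring the blindness between saccades."""
    cuts = []
    held = None  # angle the camera is currently holding
    for g in gaze_positions:
        if held is None or abs(g - held) > saccade_threshold:
            cuts.append(g)
            held = g
    return cuts
```

Small drifts within a fixation are absorbed, while true saccades produce hard cuts to a new viewing angle.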
Super Hero Gaze Telekinesis takes on the idea of visual effects in film when the super hero has the power to control and manipulate things with their eyes. The controlling interface of this experiment has it’s similarities with the Saccade Controlled Visual Angle project. It also utilizes an eye-tracking glasses to control a flash light driven by two axis servo motors. As the user look in different directions, the flash light points at the direction the viewer is looking at. The purpose of this setup is to open up a physical box while looking at it. The box is constructed with a light sensor on the top center. As the light triggers the photocell, a servo underneath the box will pull on four strings that are attached to all four sides of the box and pulls the four sides like flaps folding downwards in a synchronized motion. This experiment investigates using the gaze of the human eye to trigger physical transformation of objects in the real world. What if everything in the world can be activated and controlled by just looking at it?
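The box’s trigger logic can be sketched as a threshold with hysteresis, so flicker near the trigger level does not make the flaps chatter; the class name and sensor values below are illustrative assumptions, not the project’s actual Arduino code:

```python
class LightTriggeredBox:
    """Tracks whether the box's flaps are open, based on photocell
    readings (0-1023, as from an Arduino analog pin)."""
    def __init__(self, open_above=700, close_below=400):
        self.open_above = open_above    # flashlight hits the sensor
        self.close_below = close_below  # hysteresis band avoids chatter
        self.is_open = False

    def update(self, reading):
        if not self.is_open and reading > self.open_above:
            self.is_open = True   # servo pulls the four flaps down
        elif self.is_open and reading < self.close_below:
            self.is_open = False  # servo lets the flaps fold back up
        return self.is_open
```

Because the open and close thresholds differ, a reading that dips slightly after the flashlight hits the sensor keeps the box open rather than re-triggering it.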
Physical Mutation Interface is an interactive morphing interface inspired by the ideas of organic mutation, morphing, and cognitive visual angle. It is a speculative experiment that may suggest an interactive physical interface for consumer electronics. What if form, function, and interface could be changed by a simple gesture of wrist rotation? This stand-alone device is driven by a 3-axis accelerometer, read by an Arduino board, which triggers three servo motors, one for each axis of rotation. The servo arms are attached to strong skeletal armatures that rotate in independent directions without interfering with each other. The device is wrapped in cloth to hide the mechanics within, producing a morphing look and feel from one shape to another and creating multiple, interchangeable surfaces. The interface provides new modes of usage as the panel changes, constraining functions to each display.
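The wrist-rotation mapping can be sketched as converting the accelerometer’s gravity vector into pitch and roll tilt angles, then into servo commands. The formulas below are a standard tilt computation, assumed for illustration rather than taken from the project’s Arduino sketch (which drives three servos; only two tilt axes are shown here):

```python
import math

def tilt_to_servos(ax, ay, az):
    """Convert 3-axis accelerometer readings (in g) into pitch/roll tilt
    angles, then into 0-180 degree servo commands (90 = held level)."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    clamp = lambda deg: max(0, min(180, round(deg + 90)))
    return clamp(pitch), clamp(roll)
```

Held flat (gravity entirely on the z axis), both servos sit at their 90-degree neutral; rotating the wrist a quarter turn drives the corresponding servo to an end stop, snapping the armature to a new configuration.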
The experiments I have created so far investigate two major directions. One is a perceptual exaptation of the eye, allowing the user to control and manipulate their own perception. The other is mutating physical interfaces, allowing the user to manipulate and influence the form and outcomes of the device. Both have technological limitations, and the interactions are still quite simple and surface-level.
Because the eye-tracking device relies on a motion tracker, the location of the iris isn’t always accurate: the tracker requires movement to detect the location of the pupil. As a result, tracking works only a little more than half of the time while the user is trying to control the device. If I further develop the possibilities of eye tracking to manipulate vision, I will look into better eye-tracking software.
The Physical Mutation Interface is still at a very rough initial stage. The interactions are very straightforward, allowing only one or two manipulations of its form, and the interactions and affordances on the human receptive side are not yet complex enough to produce productive and varied personal interpretations. The importance of the Physical Mutation Interface lies not in the result or outcome but in the process of interaction.
Plans for Spring Term
Over the Christmas break and next semester, I plan to further investigate the idea of mutating/transforming physical objects as interfaces. How can these operate in a more in-depth manner, with complexity added to the interactions and their algorithms to promote productive interaction that yields varying results among different users and creates personalized interpretations? I will investigate not only the interactions of these mutating interfaces between object and human, but also between objects and objects. How will an ecology of mutational objects communicate with and/or influence each other?
1. Baudrillard, Jean, Simulacra and Simulation (The Body, In Theory: Histories of Cultural Materialism), University of Michigan Press, February 15, 1995.
2. Lane, Richard J., Jean Baudrillard, Routledge; 2nd edition, January 16, 2009.
3. Ramachandran, V.S., and Blakeslee, Sandra, Phantoms in the Brain, Harper Perennial, 1999.
4. Ishii, Hiroshi, Tangible Bits: Beyond Pixels, Tangible Media Group at MIT Media Laboratory, February 18-20 2008.
5. Allen, Philip V., The New Ecology of Things, Media Design Program at Art Center College of Design; Limited edition, April 16, 2007.