Thesis Paper Draft v.01

Link Huang
Art Center MDP
Thesis 2011

Perceptual Exaptations Part 1:
Hyperreality with Physical Mutation Interface

Introduction
How can we manipulate and exploit human sensory perception? Through evolutionary adaptation and exaptation, and through the rapid pace of technological development, perception has expanded far beyond the primitive ways humans first learned to apply their senses. If the interaction is intuitive, what new functions and controls can be introduced to existing senses? The main tasks of vision are to navigate and identify, but a good example of an exaptation of the human eye is the communication of emotion through the dilation of the pupil. Vision is important because it directly informs us about the objects and spaces in our environment, yet alternative ways of retrieving this information are researched and developed for people with various disabilities.

My investigation aims to promote new functions for our perception through the ideas of “sensory fusion,” “sensory substitution,” and “sensory hijack.” Many projects dealing with sensory substitution are designed for the blind and people with disabilities; when these new affordances are applied to a fully able person, how can such activities inform new interactions that embed the notions of discovery and stimulated imagination? My experiments are informed by neurology, cognition, and gestalt psychology, and framed through phenomenological and biological factors. These hybrid ideas are communicated through physical animation: tangible parts driven by motors and servos that move electronic devices, inspired by the structural language and techniques of visual effects and motion graphics.
Motion graphics and visual effects mediate between the worlds of fantasy and reality. For centuries, animated graphic representations and scripted information displays have been a storytelling tradition, enhancing visual and cognitive perception and allowing viewers to suspend their disbelief and immerse themselves in a scripted fictional world. The entertainment world has incorporated this hyperreality everywhere from stage performance and silent film to television and cinema. In an industrial design scheme, the clockwork, automatons, and mechanics that drive these animations require mathematical calculation and the art of craftsmanship. Visual effects are mediated in many ways, but one aspect of morphology that consistently mesmerizes is the transformation and mutation of forms over a predetermined period of time. When one object structurally alters itself into another representation, it instigates curiosity in visual perception and in the phenomenology of understanding. If this mutational mode of animation is translated into and represented by tangible objects, what affordances can it give the user as an interface? Operating such a shape-changing device requires not only sight but also touch and motor coordination. What kind of algorithmic system within these mutational devices can promote understanding and discovery while operating as an interactive physical user interface?

Historical / Context
Hyperreality comes from layers of creative imagination: a humanistic desire for a different world, one that varies by experiential interpretation and is creatively expressed as a system of simulated experiences. Reality is mediated to the point that it can no longer be distinguished from fantasy. Take Disneyland, which, as Jean Baudrillard puts it, is “an imaginary effect concealing that reality no more exists outside than inside the bounds of the artificial perimeter” (Baudrillard, 1). Baudrillard defines three orders of simulation. In the first, the real world is artificially represented in other mediums such as books, paintings, and maps. The second order blurs the boundary between reality and representation. The third order describes the simulation together with its surroundings, which he calls hyperreality; it is produced algorithmically, like the programming code that constructs virtual reality or augmented experiences. Baudrillard believes that hyperreality will come to dominate the way we experience and understand the world we live in (Lane, 2).
Visual perception is not a passive act; rather, it is an active learning experience. Humans learn to see and understand perceived data through active motor movement of the body, integrating multi-sensory processing that ranges from the senses and muscles to coordinated neurological brain function. It is a fundamental understanding in animation that an image needs to keep moving to feel continuous; even in a seemingly static frame, the object or text will still drift in slow motion. Translating these techniques into tactile devices therefore also requires movement, for better understanding and an organic spatial relationship between the object and the person.
Many motion graphics and visual effects techniques originated in theater and stage art. Some, such as the illusion of artificial weight, go back as far as pantomime performances of the 1700s. As entertainment advances with technology, so do the effects that enhance visual perception.
Tactility is not only about touching with the fingertips. Crafters such as sculptors and potters use every aspect of their hands, from the palm, nails, and multiple fingers to the backs of their hands, and even other parts of their limbs and torso, combining sensitive touch with keen observation of form to carefully perfect their craft.
Many handcraft skills that predate the Industrial Revolution have been lost or eliminated. These detailed, elaborate handmade skills make objects and architecture more humanistic: every duplication differs, carrying the unique imprints left by its maker. Even with decades of experience, every handmade craft still has its variations, making each item one of a kind. Since the invention of assembly-line manufacturing, every mass-produced item looks identical; this lowers the cost of purchase, but each item is cold and lifeless.
With the move into the digital era came a major dehumanization of our devices and tools. Everything is mass-produced to give a digital affordance, limiting our senses to perceiving information on a flat screen. Most tactility within interactive interfaces revolves around the fingertips alone. The keyboard and mouse have made humans an extension of the machine itself, making us part of the digital system that keeps the world operational.
The next phase of invention will be a merger between the tactility of craft and digital affordances: every control on a digital touch screen could be redesigned as a tangible object without a screen.
Tangible user interfaces provide affordances for digital information while leveraging the human capability to grasp and manipulate physical objects in the real world (Ishii, 4). An environment full of activated devices, with rich content and interactions embedded in tangible things and inhabitable spaces, assembles a new symbiotic system: an ongoing relationship between humans and their environment. This new ecology of things provides an evolving system that can be interpreted and influenced by the interactions and decisions of people and other objects (Allen, 5).

Process
My goal in these experiments is to translate the language of motion graphics and visual effects animation and embed those forms and movements into physical objects, devices, and environments. The project explorations begin with a matrix chart I have put together. One axis of the grid holds my ingredients: definitions of each visual effects language to be applied. The other axis includes eye tracking, other sensory input sensors (breath, heat, proximity, force, bend, touch), gestalt and cognitive psychology, and neurological tests and theories. Each cell at the intersection of two specific directions yields information that allows me to further develop innovative ideas promoting new functions for various human senses. To retain their value, these devices will be speculative tools embodying manipulation, interpretation, discovery, and the stimulation of imagination.
Much of the contextual grounding for my investigation comes from studying the neurology and psychology of human perception. The brain not only interprets sensory input through interactions with its environment; it also constantly builds gestalts from incomplete information in an attempt to make connections and logical sense of abstraction (Ramachandran, 3). This information can derive from foveal vision, the far periphery of vision, touch, or any other sense of the human body. Understanding how our eyes and brain work, along with phenomenological comprehension, has greatly shaped my decisions in creating devices that put a twist on human perception.

Prototypes
Physically Animated Motion Graphics Immersive Space is the first experiment in this investigation. If text and graphic elements have volume and occupy physical space, how can they produce an experience similar to that of a television commercial or a film title sequence? My approach uses several physical graphic elements connected to ropes driven by multiple pulleys. The installation is presented in a dark space with illuminated graphic elements; sitting or standing in a designated position, the viewer experiences a pre-scripted animation as the graphic elements move through the periphery in sequence.
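The current rig may well be hand-driven, but if the pulleys were motorized, one minimal way to script such a sequence is a keyframe table stepped through on a microcontroller. The Arduino-style sketch below is a hypothetical illustration only; every pin number and timestamp is a placeholder, not the installation's actual configuration.

#include <Servo.h>

// Hypothetical sketch: pre-scripted pulley animation.
struct Keyframe {
  unsigned long ms;  // when to fire, relative to start
  byte servoIndex;   // which pulley servo
  byte angle;        // target angle
};

const byte SERVO_PINS[3] = {9, 10, 11};
Servo servos[3];

// The choreography lives in data: each entry moves one pulley at one time.
const Keyframe SCRIPT[] = {
  {0,    0, 0},    // element 1 starts offstage
  {2000, 0, 120},  // element 1 swings into the periphery
  {3500, 1, 90},   // element 2 follows
  {6000, 2, 150},  // element 3 completes the sequence
};
const int SCRIPT_LEN = sizeof(SCRIPT) / sizeof(SCRIPT[0]);

int nextFrame = 0;
unsigned long startTime;

void setup() {
  for (int i = 0; i < 3; i++) servos[i].attach(SERVO_PINS[i]);
  startTime = millis();
}

void loop() {
  unsigned long t = millis() - startTime;
  // Fire every keyframe whose timestamp has passed.
  while (nextFrame < SCRIPT_LEN && SCRIPT[nextFrame].ms <= t) {
    servos[SCRIPT[nextFrame].servoIndex].write(SCRIPT[nextFrame].angle);
    nextFrame++;
  }
}

Because the choreography is data rather than code, re-scripting the piece for a new viewing position would only mean editing the keyframe table.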

Saccade Controlled Visual Angle exists in two versions. Both are controlled and viewed through eye-tracking glasses that combine a hacked PS3 infrared webcam for the left eye with an LCD monitor mounted for the right eye. The LCD monitor sits on a bracket driven by two featherweight servos, allowing two-axis rotation to follow the focus of the pupil. The main task of the eye tracker is to control another physical device holding the camera that feeds the LCD screen. The biggest difference between the two versions is how that viewing camera is mounted.
The first iteration of this experiment is called “Extended Sight,” in which the viewing camera is mounted on a small robotic bracket driven by two standard-size servos, allowing x- and y-axis rotations so the camera can look around the outside world. If human eyesight can be physically moved to another location, what perspective can the viewer perceive?
The second version takes the jump cut of film editing as its metaphor for seeing from different angles. When a person gazes at something or someone, the eyes naturally jump back and forth, scanning contours and following the subject while identifying information about it. These quick eye movements are called saccades. Although the brain interprets them as seamless, we are biologically blind between saccade points. The glasses-mounted eye-tracking camera controls two semicircular acrylic armatures, driven by two servos rotating on the x and y axes, whose purpose is to let the viewing camera revolve 180 degrees around an object placed on a central platform, viewing it from various angles. Due to scale constraints, the maximum object size this prototype can revolve around is 6 in x 8.5 in.
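How the gaze data reaches the servos is not detailed above, so as one plausible sketch: assume the host-side eye-tracking software streams framed byte pairs over serial, each pupil coordinate already scaled to servo degrees, and the Arduino simply writes them to the two axes. Pins and baud rate are assumptions.

#include <Servo.h>

// Hypothetical sketch: two-axis gaze-following servo mount.
// Assumes the host sends frames of (0xFF, x, y), with x and y
// pre-scaled to 0-180 degrees, so 0xFF never occurs as data.
Servo panServo;   // x-axis armature
Servo tiltServo;  // y-axis armature

void setup() {
  Serial.begin(9600);
  panServo.attach(9);
  tiltServo.attach(10);
}

void loop() {
  // Wait for the start byte so pan and tilt never get swapped.
  if (Serial.available() >= 3 && Serial.read() == 0xFF) {
    int x = Serial.read();
    int y = Serial.read();
    panServo.write(constrain(x, 0, 180));
    tiltServo.write(constrain(y, 0, 180));
  }
}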

Super Hero Gaze Telekinesis takes on the visual-effects trope of the superhero who can control and manipulate things with their eyes. Its controlling interface shares much with the Saccade Controlled Visual Angle project: eye-tracking glasses steer a flashlight mounted on two-axis servo motors, so that as the user looks in different directions, the flashlight points where they are looking. The purpose of this setup is to open a physical box by looking at it. The box carries a light sensor at its top center; when light hits the photocell, a servo underneath the box pulls four strings attached to its four sides, folding the flaps downward in a synchronized motion. The experiment investigates using the human gaze to trigger physical transformation of objects in the real world. What if everything in the world could be activated and controlled just by looking at it?
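A minimal version of the box-opening logic might look like the sketch below. The pins and light threshold are assumptions; a real build would calibrate the threshold against ambient light.

#include <Servo.h>

// Hypothetical sketch: photocell-triggered flap-opening box.
const int PHOTOCELL_PIN = A0;     // photocell in a voltage divider
const int SERVO_PIN = 9;          // servo pulling the four flap strings
const int LIGHT_THRESHOLD = 700;  // analogRead level when the flashlight hits

Servo flapServo;
bool isOpen = false;

void setup() {
  flapServo.attach(SERVO_PIN);
  flapServo.write(0);  // strings slack, flaps closed
}

void loop() {
  int light = analogRead(PHOTOCELL_PIN);
  if (!isOpen && light > LIGHT_THRESHOLD) {
    flapServo.write(120);  // wind the strings, fold all four flaps down
    isOpen = true;
  } else if (isOpen && light < LIGHT_THRESHOLD - 100) {
    flapServo.write(0);    // release once the gaze (and light) moves away
    isOpen = false;
  }
}

The 100-count hysteresis band keeps the flaps from chattering when the light level hovers near the threshold.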

Physical Mutation Interface is an interactive morphing interface inspired by the ideas of organic mutation, morphing, and the cognitive visual angle. It is a speculative experiment suggesting an interactive physical interface that could be implemented in consumer electronics. What if form, function, and interface could be changed by a simple rotation of the wrist? This stand-alone device is driven by a 3-axis accelerometer read by an Arduino board, which triggers three servo motors, one for each axis of rotation. The servo arms carry strong skeletal armatures that rotate in independent directions without getting in the way of one another. The device is wrapped in cloth to hide the mechanics within, producing a morphing look and feel from one shape to another and creating multiple, interchangeable surfaces. The interface provides new modes of usage as the panel changes, constraining different functions to each display.
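The core mapping is simple enough to sketch: read each accelerometer axis, scale it into servo range, and write it to the matching armature. The Arduino and the three-servo layout follow the description above, but the sensor model (an analog part such as the ADXL335), the pins, and the raw ranges are my assumptions.

#include <Servo.h>

// Hypothetical sketch: 3-axis accelerometer driving three armature servos.
const int AXIS_PINS[3] = {A0, A1, A2};   // x, y, z analog outputs
const int SERVO_PINS[3] = {9, 10, 11};   // one servo per axis of rotation
const int RAW_MIN = 270;                 // observed analogRead extremes;
const int RAW_MAX = 410;                 // calibrate against the real sensor

Servo armatures[3];

void setup() {
  for (int i = 0; i < 3; i++) armatures[i].attach(SERVO_PINS[i]);
}

void loop() {
  for (int i = 0; i < 3; i++) {
    int raw = analogRead(AXIS_PINS[i]);
    // Tilt on each axis becomes rotation of the matching armature.
    int angle = map(constrain(raw, RAW_MIN, RAW_MAX),
                    RAW_MIN, RAW_MAX, 0, 180);
    armatures[i].write(angle);
  }
  delay(15);  // let the servos settle between updates
}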

Reflections
The experiments I have created so far investigate two major directions. One is a perceptual exaptation of the eye, allowing the user to control and manipulate their own perception. The other is mutating physical interfaces, allowing the user to manipulate and influence the form and outcomes of a device. Both have technological limitations, and the interactions remain simple and surface-level.
Because the eye-tracking device relies on a motion tracker, the reported location of the pupil is not always accurate: the tracker needs movement to detect the pupil at all, so it works only a little more than half the time while the user is trying to control the device. To further develop the possibilities of eye tracking for manipulating vision, I will look into better eye-tracking software.
The physical mutation interface is still at a very rough initial stage. Its interactions are straightforward, allowing only one or two manipulations of its form, and the interactions and affordances on the human side are not yet complex enough to produce productive and varied personal interpretations. The importance of the physical mutation interface lies not in its result or outcome but in the process of interaction.

Plans for Spring Term
Over the Christmas break and next semester, I plan to further investigate the idea of mutating and transforming physical objects as interfaces. How will these operate in a more in-depth manner? I will add complexity to the interactions and their algorithms to promote productive interaction that yields varied results among different users and creates personalized interpretations. I will investigate the interactions of these mutating interfaces not only between object and human but also between object and object. How will an ecology of mutational objects communicate with and influence one another?

Bibliography
1. Baudrillard, Jean. Simulacra and Simulation (The Body, In Theory: Histories of Cultural Materialism). University of Michigan Press, February 15, 1995.

2. Lane, Richard J. Jean Baudrillard. Routledge, 2nd edition, January 16, 2009.

3. Ramachandran, V.S., and Sandra Blakeslee. Phantoms in the Brain. Harper Perennial, 1999.

4. Ishii, Hiroshi. “Tangible Bits: Beyond Pixels.” Tangible Media Group, MIT Media Laboratory, February 18–20, 2008.

5. Van Allen, Philip. The New Ecology of Things. Media Design Program, Art Center College of Design; limited edition, April 16, 2007.

Abstract V.05

Perceptual Exaptations

How can we manipulate and exploit human sensory perception? Through evolutionary adaptation and exaptation, and through the rapid pace of technological development, perception has expanded far beyond the primitive ways humans first learned to apply their senses. If the interaction is intuitive, what new functions and controls can be introduced to existing senses? The main tasks of vision are to navigate and identify, but a good example of an exaptation of the human eye is the communication of emotion through the dilation of the pupil. Vision is important because it directly informs us about the objects and spaces in our environment, yet alternative ways of retrieving this information are researched and developed for people with various disabilities. My thesis explorations investigate and promote new functions of our perception through the ideas of “sensory fusion,” “sensory substitution,” and “sensory hijack.” Many projects dealing with sensory substitution are designed for the blind and people with disabilities; when these new affordances are applied to a fully able person, how can such activities inform new interactions that embed the notions of discovery and stimulated imagination? My experiments are informed by neurology, cognition, and gestalt psychology, and framed through phenomenological and biological factors. These hybrid ideas are communicated through physical animation: tangible parts driven by motors and servos that move electronic devices, inspired by the structural language and techniques of visual effects and motion graphics.

My main focus is on exploiting visual perception, whether the understanding of seeing or the physical act of looking, but I am also interested in exploring other sensory perceptions such as touch, breath, and sound. Visual perception is not a passive act; rather, it is an active learning experience: humans learn to see and understand perceived data through active body movement. My project explorations begin with a matrix chart I have put together. One axis of the grid holds my ingredients: definitions of each visual effects language to be applied. The other axis includes eye tracking, other sensory input sensors (breath, heat, proximity, force, bend, touch), gestalt and cognitive psychology, and neurological tests and theories. These investigations will provide information that allows me to further develop innovative ideas promoting new functions for existing senses. To retain their value, these devices will be speculative tools embodying manipulation, interpretation, discovery, and the stimulation of imagination.
My obsession began with studying motion graphics in my undergraduate program at Art Center, followed by several years working in the industry before returning to grad school. Through this experience I developed an interest in how visual perception works. With an interest in gestalt and cognitive psychology, I began to study the neurology of visual perception as well as disorders such as synesthesia and various forms of blindness (color, face, object, motion). My hobbies growing up, however, revolved around auto mechanics and craftwork in materials such as metal, wood, and plastic. Hybridizing my professional background in motion graphics with my hobby in mechanics, my interests have converged on building tangible interfaces, apparatuses, and installations inspired by the techniques and languages of visual effects.

Mutation Interface: Physical Interactive Form Follows Function

Keywords: Organic Mutation, Morphing, Interactive Physical Interface, Cognitive Visual Angle, Form Follows Function Physically

Question: What if the form, function, and interface could be changed by a simple gesture of wrist rotation?

Summary: Mutation Interface is an interactive morphing interface inspired by the ideas of organic mutation, morphing, and the cognitive visual angle. It is a speculative experiment suggesting an interactive physical interface that could be implemented in consumer electronics. This stand-alone device is driven by a 3-axis accelerometer read by an Arduino board, which triggers three servo motors, one for each axis of rotation. The servo arms carry strong skeletal armatures that rotate in independent directions without getting in the way of each other. The device is wrapped in cloth to hide the mechanics within, producing a morphing look and feel from one shape to another and creating multiple, interchangeable surfaces.

[Video: demonstrating how the device moves with the 3-axis accelerometer]

[Photos: beginning to prototype and making it work]

[Photo: cleaned up the wires]
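One refinement worth noting, my assumption rather than something the prototype necessarily does: low-pass filtering the accelerometer readings before they reach the servos would let the cloth surface ease between shapes organically instead of twitching with sensor jitter. A sketch, with the same placeholder pins and ranges as the basic mapping:

#include <Servo.h>

// Hypothetical refinement: exponential smoothing so the armatures
// ease between shapes instead of jumping with every sensor jitter.
const int AXIS_PINS[3] = {A0, A1, A2};
const int SERVO_PINS[3] = {9, 10, 11};
const int RAW_MIN = 270;
const int RAW_MAX = 410;
const float ALPHA = 0.08;  // smaller = slower, more organic easing

Servo armatures[3];
float smoothed[3];

void setup() {
  for (int i = 0; i < 3; i++) {
    armatures[i].attach(SERVO_PINS[i]);
    smoothed[i] = analogRead(AXIS_PINS[i]);  // start from the current tilt
  }
}

void loop() {
  for (int i = 0; i < 3; i++) {
    // Blend each new reading into a running average.
    smoothed[i] += ALPHA * (analogRead(AXIS_PINS[i]) - smoothed[i]);
    int angle = map(constrain((int)smoothed[i], RAW_MIN, RAW_MAX),
                    RAW_MIN, RAW_MAX, 0, 180);
    armatures[i].write(angle);
  }
  delay(15);
}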

Extended Sight + PAMGIS
(Physical Animated Motion Graphics Immersive Space)

Keywords: Immersive Physical Animation Data Visualizer, Saccade, Jump cut editing, viewing from different angles, Eye tracking camera control.

Question: What if human eyesight could be relocated to another part of the body or another part of the world?

There are two parts to this project. Extended Sight explores the idea of moving a person’s eyesight to another location: elsewhere on the body, as part of a limb or any other body part, or remotely at another site. It provokes the question: if our eyesight is removed from our head and placed somewhere else, can we see from different perspectives? The project can also inform ideas for storytelling. If many of these eye-controlled cameras were placed in different countries to tell different narratives happening at the same time, how could the footage and data collected inform a new way of telling stories?

PAMGIS (Physical Animated Motion Graphics Immersive Space)
By placing the camera inside an enclosed box with physical graphic and informational elements strategically positioned, this immersive physical data visualization installation is designed to recreate the experience of watching a motion graphics animation in a physical space. From a first-person point of view, with the eye-tracking-controlled camera placed in this space, the viewer is given the affordance to navigate the text and graphic information freely. The space reverse-engineers the way animation is produced in a scripted environment: as the viewer pans the camera and brings graphic elements into frame, the view in the viewfinder is meant to feel as if the graphics were animated on screen.

TRAILS: Super Hero Power Telekinesis

Keywords: Synchronization, Transformation, Visual Search Selection, Visuospatial Function

Summary: This experiment explores visuospatial function, object transformation, multiplicity and synchronization, and visual scanning. Eye-tracking technology analyzes eye movement, and the scanning motion triggers the opening of a box inspired by transforming mechanical parts. This single-function box is then duplicated in mass quantity and organized in a grid; as a result, the boxes open in synchronized motion as the eye pans across and looks around.
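As a simplified, hypothetical version of the installation logic: each box’s servo can be scheduled to open after a delay proportional to its distance from the gazed-at cell, which is what produces the trailing wave. Here a single row stands in for the grid, the gaze position arrives as one byte over serial, and all pins and timings are placeholders.

#include <Servo.h>

// Hypothetical sketch: a row of flap boxes opening as a staggered wave.
const int NUM_BOXES = 6;
const int SERVO_PINS[NUM_BOXES] = {3, 5, 6, 9, 10, 11};
const unsigned long WAVE_STEP_MS = 80;  // extra delay per cell of distance

Servo boxes[NUM_BOXES];
unsigned long openAt[NUM_BOXES];  // scheduled opening time; 0 = none

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_BOXES; i++) {
    boxes[i].attach(SERVO_PINS[i]);
    boxes[i].write(0);  // closed
    openAt[i] = 0;
  }
}

void loop() {
  if (Serial.available() > 0) {
    int gazed = constrain(Serial.read(), 0, NUM_BOXES - 1);
    // Boxes farther from the gaze open later, forming the trail.
    for (int i = 0; i < NUM_BOXES; i++) {
      openAt[i] = millis() + abs(i - gazed) * WAVE_STEP_MS;
    }
  }
  for (int i = 0; i < NUM_BOXES; i++) {
    if (openAt[i] != 0 && millis() >= openAt[i]) {
      boxes[i].write(120);  // flaps fold open
      openAt[i] = 0;
    }
  }
}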

Question Raised: When a physical transformation in mass quantities is beautifully synchronized to create a patterned trail, what kind of new functions and feelings can it inform beyond a mesmerizing effect?

This experiment began with eye tracking and the theatrical spotlight as a metaphor, creating an eye-controlled car headlight. At this early stage I was experimenting with making something practical rather than speculative and provocative. Here is a video demonstrating how the headlight follows the driver’s gaze to amplify the clarity of their view.

Super Hero Power Telekinesis
Taking the idea further, I added multiplication to the transforming box and the eye-tracking-controlled spotlight. The visual effects metaphor comes from the superheroes of comics and film, who often have a special power that lets them manipulate an object with their mind, eyes, or another sense. Combining that with the visual effects language of multiplicity and synchronization, I put together an After Effects animation to demonstrate how the installation would work as your eyes move across the peripheral view.

MDP Science Fair Oct 31, 2011

A successful day at Art Center MDP’s annual science fair, with a great variety of topics ranging across sound, touch, the urban, language, memory, gaming, gesture, and mine on visual perception. My current working title is “Perceptual Exaptations.”

The three experiments I displayed, starting from the right, consist of:

Trails – Transformation of the physical world with your gaze.

In the middle is the Mutational Physical Interface, an accelerometer-driven, form-changing physical interface.

On the very left is Extended Sight + Physical Animated Motion Graphics Immersive Space. The glasses consist of an eye tracker (a hacked PS3 IR webcam) and an LCD driven by two servos for the right eye. The camera controlled by the glasses is placed inside a box decorated with dimensional graphics, scripted so that the information looks as if it animates onto the screen as the camera pans around.

Week 5: Hacking the PlayStation 3 Eye Camera for Eye Tracking

The first time I hacked a PS3 Eye camera to use as an eye tracker was a year ago, during my concept year here in MDP, for a class called New Ecology of Things (NET) taught by Phil van Allen. The warm-up project at the time was called “Useless Networks,” for which I created a clever but useless helmet called “The 11th Finger.” I learned how to create this device by following the tutorials for the Eyewriter; thanks to the smart people over at the Graffiti Research Lab for their detailed tutorial on their website as well as on Instructables.

[Photo: to take apart the case, dig in with a flat-head screwdriver and pry it open while twisting]
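The Eyewriter tutorials build their tracking in openFrameworks; purely as an illustration of the same underlying idea, the hypothetical C++/OpenCV sketch below finds the pupil as the largest dark blob in an IR-lit close-up of the eye. The threshold and camera index are assumptions to be tuned per rig.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);    // the hacked PS3 Eye, seen as a webcam
    if (!cap.isOpened()) return 1;
    const int THRESH = 40;      // pupil pixels read near-black under IR
    cv::Mat frame, gray, dark;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Inverted threshold turns the dark pupil into a white blob.
        cv::threshold(gray, dark, THRESH, 255, cv::THRESH_BINARY_INV);

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(dark, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        // Assume the largest dark blob is the pupil.
        int best = -1;
        double bestArea = 0;
        for (int i = 0; i < (int)contours.size(); i++) {
            double area = cv::contourArea(contours[i]);
            if (area > bestArea) { bestArea = area; best = i; }
        }
        if (best >= 0) {
            cv::Point2f center;
            float radius;
            cv::minEnclosingCircle(contours[best], center, radius);
            cv::circle(frame, cv::Point(center), (int)radius,
                       cv::Scalar(0, 255, 0), 2);
            // center.x / center.y would be scaled and streamed onward.
        }
        cv::imshow("eye", frame);
        if (cv::waitKey(1) == 27) break;  // Esc quits
    }
    return 0;
}

In a full pipeline the blob center would be smoothed, calibrated against known gaze targets, and streamed to whatever device the gaze is meant to control.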

Week 4: Prototyping / Learning through making

These are the first iterations of my experiments, implementing the sketches from the previous week.
Tangible Motion Graphics – an idea around a physically animated spatial experience

Through these scale models and experiments, the investigation begins to speculate on the distance and perceivable range around the viewer. When experienced in the dark, the ability to play with fading is very useful. If I continue this investigation, these space-occupying graphic elements will be motorized.

Transformation
There are countless examples of visual effects, in film and even in our daily lives, that revolve around the idea of transformation; that includes Hollywood’s hit Transformers franchise, along with many other science fiction films. This is the first test of a self-revealing opening. Imagine this container duplicated on all six sides of a cube: each opening face could be an independent interface used for a different task.

Mutation – Cube Duplication

This rough initial prototype investigates the possibility of hidden compartments or flaps that reveal additional mass, allowing physical duplication or multiplicity. Visual effects often permit the nearly impossible: virtual expansion of mass. Take the flying cars in the cartoon The Jetsons, which fold up into a suitcase, or the Japanese comic/anime Dragon Ball Z, where characters click a pill-shaped capsule and throw it onto the ground… poof! It turns into a motorcycle, a private hotel, preserved food, or some other large device.

Synchronization – Mechanical Iris study

The idea of synchronization in visual effects or animation always has a mesmerizing effect on the viewer. This initial experiment is part of a study in simply moving many shaped pieces at the same time to morph or expose. The mechanism could be embedded into container openings, or many such containers could be duplicated to create a physical interface for access and various applications.

Literature Review Rough Draft

How far can we push human sensory perception? Visual perception, for example, is an active gesture rather than a passive one: to understand what we see, the body relies on sensorimotor activity (body movement) and other sensory inputs, including phenomenological interpretation, to draw conclusions from perceived information (1). Cognitive psychology is the basis for much HCI development, though most HCI is designed around functionality rather than visual perception, focusing on the ability to understand and control through cognition. Motion graphics and visual effects animation rely on the same motor-sensing theory, except restricted to the spatial dimensions within the screen.

Visual effects and motion graphics are based on the idea of active viewing through movement, hence the technique of a slow multi-dimensional drift that reinforces understanding while the audience absorbs text and graphic information. Through movement we also gain a spatial understanding of the scripted environment; even though screen-based visual effects and animation are displayed on a flat screen, the space within is portrayed as potentially having greater or even infinite dimensions. There are not many physical interaction projects based on the visual effects language, but existing projects can be interpreted in relation to aspects of VFX techniques and philosophies.

Spatial recognition through the sensorimotor system is crucial to perceptual understanding. In the 1960s, Richard Held and Alan Hein conducted a famous study, “Movement-produced stimulation in the development of visually guided behavior.” The experiment placed two kittens on either side of a device resembling a classic weighing scale: one kitten could walk with its limbs protruding, while the other was held fully suspended, without the affordance of mobility. The kitten that actively explored the environment developed a normal visual system compared to the kitten that received only passive visual stimulation (2).

Many researchers working with perceptual technologies develop ideas geared toward people with disabilities who lack one or more sensory inputs to the brain. Few of these developments are aimed at art and expression, mainly for reasons of budget, profit, and marketing. That is understandable, but much of this research could be very rich if applied to new ways of interacting with devices or tools for everyday use.

Through technology, one sense can be replaced with another. In psychology this is called sensory substitution, a term coined by Paul Bach-y-Rita in 1969 (3). His experiment used a camera that turned a pixelated image into a braille-like pattern, an interpretation of dark and light pixel contrast, protruding onto the back, so that touch could be used to understand imagery. Another project around this idea implements a similar transfer of pixelated images to a sensation on the tongue, using a grid of small electrical emitters laid across one of the most sensitive surfaces of the body.
Sensory hijack is about borrowing from one sense to serve another. It is similar to sensory substitution, but with subtle, telling differences. A couple of examples: “FingerSight” (4) uses sound waves or reflected laser beams to give haptic feedback on the side of the fingers, allowing the blind to navigate; “The vOICe” (5), also developed for the blind, is a device and system that lets the blind see with sound, its design of pitch triggering the neurological circuits of the brain that serve sight. Another term for this phenomenon is “artificial synesthesia.”

The BMW kinetic sculpture by ART+COM (6) takes particle animation, infuses it with the gestalt theory of “emergence,” and executes the idea in the physical interactive world. The installation brings animated, synchronized particles into physical space, letting the viewer experience the effect in person and from various angles. It is one of the inspirations behind my interest in translating the visual effects language into the real world, rather than onto a screen seen only from the point of view designated by its creators.

Optical illusions inform us about the way human perception works. In a TED interview, neuroscientist and artist Beau Lotto (7) discusses his perception-bending projects, some of which are installation art, some smartphone apps, and some object-based speculative work, many of which Lotto describes as synesthetic experiences.

Through these examples and more, one can see the wide variety of ways perception is manipulated with technology. Many are used in medicine, and there remains a large gap between these technologies and HCI. Yet once a similar “artificial synesthesia” is implemented in a speculative project for the everyday user, or in a communicative art installation, it becomes an impressive experience. The way humans perceive is not a one-to-one interaction; it involves motor sensing as well as cognition to comprehend the message communicated through these speculative objects or installations. Human interaction should not be flat. If it must happen on a flat surface such as an e-reader or personal computer, the interface should introduce some dimensionality for the audience to fully engage. The ultimate goal is to give dimensional affordances to every possible human-computer interaction technology.

Bibliography

1. Noë, Alva. Action in Perception. Cambridge, MA: MIT Press, 2004. Print.

2. Held, R., and Hein, A. (1963). “Movement-produced stimulation in the development of visually guided behavior.” Journal of Comparative and Physiological Psychology 56(5): 872–876.

3. Bach-y-Rita, P., Collins, C.C., Saunders, F., White, B., and Scadden, L. (1969). “Vision substitution by tactile image projection.” Nature 221: 963–964.

4. Stetten, G., Klatzky, R., Nichol, B., Galeotti, J., Rockot, K., Zawrotny, K., Weiser, D., Sendgikoski, N., and Horvath, S. (2007). “Fingersight: Fingertip visual haptic sensing and control.” In Haptic, Audio and Visual Environments and Games (HAVE 2007), IEEE International Workshop, pp. 80–83. http://dx.doi.org/10.1109/HAVE.2007.4371592

5. The vOICe: synthetic vision through auditory video representations. http://www.seeingwithsound.com/

6. ART+COM. Kinetic Sculpture. BMW Museum, Munich, 2008. http://www.artcom.de/

7. Beau Lotto (neuroscientist and artist), on perception-bending. TED Blog Q&A, October 8, 2009. http://blog.ted.com/2009/10/08/beau_q_and_a/