[Reader-list] Human-Computer Oscillation
Harwood
Harwood at scotoma.org
Tue May 29 07:07:28 IST 2001
I hope this is useful as a bridge between the Programmers and the
other elites at Sarai.
Human-Computer Oscillation and the need for calories.
The foundation of this report is the human sensing-system's
contribution to the physical geography of the body: the orientation
of the body in space, an awareness of spatial relationships and an
appreciation of the specific qualities of different places and
different things, both currently experienced and removed in time. The
body's sensing systems offer important media through which space and
time are experienced and made sense of in computer interface. Yet
also, each sense system seems to offer its own distinctive character
to that experience and physical geography in general. In particular
contexts, a certain sense and specific style of operation of that
sensing may play a dominant role in establishing geographical meaning
and the meanings of particles of data from computer systems. Thus the
multisensual nature of geographical experience is not uniform within
interface but varies across space and time, between individuals and
communities, between cultures and periods of time. Physical geography
is always changing, and any general characteristics recognized are
always specific to the need for calories and complex local economic
and social systems arising from that need.
The human user's sensing systems express themselves within the brain
through complex states of fluctuating voltage happening in conjunction
with chemical processes. This electro-chemical complexity around the
user's synapses eventually formulates itself into a symbolic-world.
The structure and nature of this symbolic-world does not concern this
report. It will suffice to say that one exists.
The first challenge of Interaction with an operating computer is to
funnel this symbolic world of the user into physical actions that
force locator devices from one state to another. The velocity and
frequency of this force can then be monitored or measured by setting
up constant states of voltage in the locator devices concerned. The
fluctuations of voltage can then be used within logic gates. E.g.
the keyboard: when the letter "K" is pressed down on the keyboard
through the physical force of the user, the voltage goes high (usually
more than 2.4 V in TTL or CMOS logic), and when it is released the
voltage goes low (less than 2.4 V). This information is then logged in
a register within the host machine.
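The keypress-to-register step can be sketched in software. This is an illustrative model only, not real keyboard firmware: the 2.4 V threshold comes from the text above, while the sample stream and the list-as-register are my assumptions.

```python
# Illustrative model: classify voltage samples against a logic threshold
# and log press/release transitions of the "K" line into a register.

TTL_HIGH_THRESHOLD = 2.4  # volts; above this the line reads as logic high

def log_key_events(voltage_samples):
    """Turn a stream of voltages on the "K" line into logged key events."""
    register = []        # stand-in for the host machine's register
    previous_level = 0   # assume the line starts low (key released)
    for volts in voltage_samples:
        level = 1 if volts > TTL_HIGH_THRESHOLD else 0
        if level != previous_level:
            register.append("K down" if level else "K up")
            previous_level = level
    return register

# A press (voltage rises past 2.4 V) followed by a release:
print(log_key_events([0.2, 0.3, 4.8, 5.0, 4.9, 0.1]))  # ['K down', 'K up']
```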
This state of voltage is then available through the selectivity of
software (interface) as a Basic Interaction Task (BIT). The computer
may be preprogrammed to display this BIT on a monitor (cathode ray
tube) by deflecting the path of three electron beams (Red, Green,
Blue) by means of electro-magnetic coils. The path of these electrons
is accelerated toward the phosphor-coated screen, made of heated
silica (glass), by means of a high positive potential applied near the
face of the tube, typically 15,000 to 20,000 volts. Eventually the
electrons hit a specific grouping of phosphors, transferring their
kinetic energy to the phosphor atoms and exciting electrons within
them to a higher quantum-energy level. In returning to their previous
quantum levels, these excited electrons give up their extra
energy in the form of light - usually aimed at the user's visual
system. The user's visual system uses photo-receptors situated in the
ocular mechanism (eyes, with intrinsic and extrinsic eye muscles, as
related to the vestibular organs, the head and the whole body). This
system explores and finds convergence in the form and colour of its
targets and the spaces in between them, thus converting the variables
of the structure of ambient light back into complex states of
fluctuating voltage located around the user's synapses and thus back
into the symbolic world of the user.
This example represents here the oscillation between the user and the
machine's complex states of fluctuating voltage. Having established a
reliable link, or oscillation, we can now move on, pulling out of this
process a specific subject for further exploration.
Tasks
A task is a specific chore or duty to be done. Tasks exist on either
side of our oscillation and are preprogrammed by the user's or
computer's environment. E.g. the computer has a chore or duty when
first initialized: electricity is resisted in certain ways so as to
make it pass down the paths of least resistance, measuring other
voltages through capacitance in order to check that its environment
(working conditions) fits within its predefined order. The human unit,
needing calories in order to feed the energy demands of its sensing
systems, has the task of appropriating value that it can exchange for
food, processing this vegetative produce, digesting it and turning it
into complex states of fluctuating voltage.
So we have the human unit oscillating with the computer unit and both
preprogrammed with sets of tasks. The side of the equation that
interests us here, is how the computer recognizes a BIT in its
oscillation with the user.
With a BIT, the user of an interactive system enters a unit of
information that is meaningful in the context of the application. How
large or small is such a unit? For instance, does moving a device a
small distance enter a unit of information? Yes, if the new position
is put to some application purpose, such as repositioning an object.
No, if the repositioning is just one of a sequence of repositionings
as the user moves the cursor to place it on top of a menu item.
Here, it is the menu item that is the unit of information.
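The granularity question above can be sketched in code: a stream of low-level events is reduced to a single BIT. The event format and menu geometry here are my assumptions, not anything from the original.

```python
# Hedged sketch: many low-level events, one Basic Interaction Task (BIT).
# Menu rectangles are (left, top, right, bottom), invented for illustration.

MENU_ITEMS = {"Open": (0, 0, 100, 20), "Save": (0, 20, 100, 40)}

def point_in_rect(x, y, rect):
    left, top, right, bottom = rect
    return left <= x < right and top <= y < bottom

def bit_from_events(events):
    """Individual moves are not BITs; only the final click on a menu
    item yields a unit of information meaningful to the application."""
    cursor = (0, 0)
    for kind, payload in events:
        if kind == "move":
            cursor = payload          # repositioning: no BIT yet
        elif kind == "click":
            for name, rect in MENU_ITEMS.items():
                if point_in_rect(cursor[0], cursor[1], rect):
                    return name       # the menu item is the BIT
    return None

events = [("move", (40, 5)), ("move", (50, 10)), ("click", None)]
print(bit_from_events(events))  # Open
```

Three events enter; one unit of information ("Open") comes out.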
The space between what is treated as a BIT and what is not allows for
the objectification of a user-task.
Objectification, within this text, represents our ability to see a
thing as different from ourselves. This in turn allows us to explore
and transform the thing at a spatial range from ourselves, seemingly
leading to a sense of ownership of the thing (not necessarily
individual ownership but ownership in general mediated through
present economic cultures). This separation of ourselves and the
object is achieved through a multitude of ritual practices, software
interface being just one.
So it can be said: the user's objectification of content in computer
interface relies on the selective translation of user force through
locator devices. (This selective translation is of course predefined
by the programmer's need for calories and the relative social and
cultural parameters in which this need arises.) Software (interface)
requires the recognition of BITs on the part of the user in order to
allow the user to objectify content within the interface.
Example locator device: mouse.
Locator devices are either absolute or relative. Absolute devices
such as a graphics tablet have a frame of reference, or origin and
report position relative to that point of origin. An absolute device
can be used to specify an arbitrarily large change in position without
contact with the tabletop. This gives it the ability to transcribe
things that are already objectified into the computer (such as
tracing a plan or a drawing). Relative devices, on the other hand, such
as the mouse, trackballs and velocity-control joysticks have no
absolute origin and report only changes from former positions.
Relative devices cannot readily be used for transcribing real world
coordinates into the computer.
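The absolute/relative distinction can be sketched as two device models. The class names and report formats are invented for illustration.

```python
# Sketch of the absolute/relative distinction; names are assumptions.

class AbsoluteDevice:
    """E.g. a graphics tablet: reports position against a fixed origin."""
    def report(self, x, y):
        return ("position", x, y)   # real-world coordinates survive

class RelativeDevice:
    """E.g. a mouse: reports only changes from its former position."""
    def __init__(self):
        self.last = None
    def report(self, x, y):
        if self.last is None:
            self.last = (x, y)
            return ("delta", 0, 0)
        dx, dy = x - self.last[0], y - self.last[1]
        self.last = (x, y)
        return ("delta", dx, dy)    # the origin is lost; only motion remains

tablet, mouse = AbsoluteDevice(), RelativeDevice()
print(tablet.report(120, 80))   # ('position', 120, 80)
mouse.report(120, 80)
print(mouse.report(125, 70))    # ('delta', 5, -10)
```

The tablet's report can be mapped straight onto a traced drawing; the mouse's deltas cannot, which is why it cannot readily transcribe real-world coordinates.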
The Mouse
The mouse, having no absolute origin, is a relative device containing
two states at the top level (working and not-working) and two
parameters (movement and button-states), each containing one further
variable of (time-in-between-states).
E.g. Top-level-states-of-a-mouse (not-working [nothing], working
[variables-of-a-working-mouse ([movement, timing-of-movement],
[button-states, timing-in-between-button-states])])
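The bracketed state description above can be transcribed as a nested data structure; the structure is the text's, the dict form is my assumption.

```python
# The mouse's top-level states and parameters as a nested dict.
# Empty lists stand in for the time-based variables each parameter carries.

mouse_states = {
    "not-working": None,  # nothing
    "working": {
        "movement": {"timing-of-movement": []},
        "button-states": {"timing-in-between-button-states": []},
    },
}

print(sorted(mouse_states["working"]))  # ['button-states', 'movement']
```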
The objectification of content within interface (a user-task) in mouse
input can be carried out by repressing both the timing-of-movement
variable and the timing-in-between-button-states variable, and by
filtering the button-states through rectangles of interest mapped
onto the movement variable.
Expressed:
repeat while <movement> is happening
  if (<movement> within Mapped-Rectangle-of-Interest) then
    if (<button-states> = down) then
      do-something-useful
    end if
  end if
end repeat
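The pseudocode above can be rendered as a minimal runnable sketch, assuming a stream of (x, y, button_down) samples and an invented rectangle of interest. Note what is repressed: no timestamps appear anywhere.

```python
# Runnable rendering of the loop: filter button-states through a mapped
# rectangle of interest. Coordinates and rectangle are assumptions.

RECT_OF_INTEREST = (10, 10, 60, 40)  # left, top, right, bottom

def inside(x, y, rect):
    left, top, right, bottom = rect
    return left <= x < right and top <= y < bottom

def do_something_useful(x, y):
    return f"selected at ({x}, {y})"

def run(samples):
    results = []
    for x, y, button_down in samples:       # repeat while movement happening
        if inside(x, y, RECT_OF_INTEREST):  # within mapped rectangle
            if button_down:                 # button-states = down
                results.append(do_something_useful(x, y))
    return results

print(run([(5, 5, True), (20, 20, False), (30, 30, True)]))
# ['selected at (30, 30)']
```

Only the third sample survives the two filters; the other two, and every time-based variable, are discarded.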
This allows the user to experience selection of content in the
interface outside of a continuous time-frame. The repression of the
time-based variables of the mouse allows the user to feel in charge
of a process and not incidental to one that is already happening. The
next step to objectification in an application interface is to record
only those movement events that are significant to the interface
and nothing else. This is done in a windowing environment by using
icons to represent files, applications and menus.
This objectification in interface allows the user to see software
structures as part of a fixed environment that is external to
themselves. The production and exchange of value within capitalism
requires that such a process takes place. The user's contact-time with an
application is made into something more definite, constant, or in
other words, an object. We then objectify it, i.e. replace the
process (the entire subjective and 'objective' range of the user's
contact-time with the application) by another object.
Objectification within this model is a kind of meta-system
transition. Normally in a meta-system transition we create a new
process which controls the old one(s). In objectification the new
process controls not the old ones, but the objects representing these
processes.
The most common form of objectification is definition. In interface
design for instance, algorithms are defined as computational
processes that we expect to be executed in a certain fixed manner by
the user.
Having established a reliable model of one specific aspect of
objectification in interface design, the reader of this report may
like to consider the following questions.
Q: What are the consequences for the appropriation of value within
capitalist systems if we interfere with this objectification process
within interface design?
Q: Having established that the selective reading of the user's input
data through the mouse helps lead to objectification of content
within interface, what happens if we create software that acts on all
possible variables within mouse interaction?
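As a thought-experiment sketch for the second question, one could record every variable of the interaction instead of repressing the time-based ones. The class, its field names, and the fake clock are all my assumptions.

```python
# Instead of repressing timing-of-movement and
# timing-in-between-button-states, keep everything.

import time

class FullMouseRecorder:
    """Keeps movement, button-states, and both timing variables."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.trace = []
        self.last_button_time = None

    def on_move(self, dx, dy):
        # timing-of-movement: every move is timestamped, not discarded
        self.trace.append(("move", dx, dy, self.clock()))

    def on_button(self, down):
        now = self.clock()
        gap = None if self.last_button_time is None else now - self.last_button_time
        self.last_button_time = now
        # timing-in-between-button-states is preserved as the gap
        self.trace.append(("button", down, gap, now))

# With a fake clock, the recorder exposes the intervals interfaces hide:
ticks = iter([0.0, 0.25, 1.0])
rec = FullMouseRecorder(clock=lambda: next(ticks))
rec.on_move(3, -1)
rec.on_button(True)
rec.on_button(False)
print(rec.trace[-1])  # ('button', False, 0.75, 1.0)
```

Software built on such a trace would make the user's continuous time-frame part of the content, rather than repressing it to produce the feeling of being in charge.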
Harwood at scotoma.org
Tel +31 (0) 20 365 9334
MONGREL
http://www.mongrelx.org
HARWOOD DE MONGREL TATE GALLERY SITE:
http://www.tate.org.uk/webart/mongrel/home/default.htm/
WASTE_WORDS THEIR WEIGHT& FREQUENCY IN LONDON'S MUNICIPAL RUBBISH
http://www.heise.de/tp/deutsch/kunst/waste/index.html
Linker site
http://www.Linker.org.uk