An experiment in data translation.
Data is pulled from the Twitter API and converted through different media: first as text in the form of a tweet, then as a ratio determined by the number of consonants to vowels, then as both sound (c : v as a frequency relationship) and laser light blinking in Morse code. Finally, it is captured as a Lissajous figure on paper using photo-chemical processes in a darkroom.
Step 1: After signing up for a Twitter Developer account and obtaining the required access tokens, Processing is employed to access the Twitter API.
Step 2: Max and Processing communicate using the OSC protocol. Max can now query Twitter through Processing, and the results are returned to Max.
Step 3: Once returned, the tweet is translated into Morse code and, simultaneously, its consonant and vowel counts are extracted.
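This translation step can be sketched in Python as a simplified stand-in for the Max patch (the letter/word separators and the silent dropping of non-alphabetic characters are assumptions made here for illustration):

```python
# Illustrative sketch of the tweet-to-Morse and consonant/vowel translation.
MORSE = {
    'a': '.-', 'b': '-...', 'c': '-.-.', 'd': '-..', 'e': '.',
    'f': '..-.', 'g': '--.', 'h': '....', 'i': '..', 'j': '.---',
    'k': '-.-', 'l': '.-..', 'm': '--', 'n': '-.', 'o': '---',
    'p': '.--.', 'q': '--.-', 'r': '.-.', 's': '...', 't': '-',
    'u': '..-', 'v': '...-', 'w': '.--', 'x': '-..-', 'y': '-.--',
    'z': '--..',
}

VOWELS = set('aeiou')

def to_morse(tweet):
    """Translate a tweet to Morse: one space between letters, ' / ' between words."""
    words = tweet.lower().split()
    return ' / '.join(
        ' '.join(MORSE[ch] for ch in word if ch in MORSE)
        for word in words
    )

def consonant_vowel_counts(tweet):
    """Count consonants and vowels among the tweet's letters."""
    letters = [ch for ch in tweet.lower() if ch.isalpha()]
    v = sum(1 for ch in letters if ch in VOWELS)
    return len(letters) - v, v

to_morse("hello")                 # '.... . .-.. .-.. ---'
consonant_vowel_counts("hello world")  # (7, 3)
```

Both values feed the next two steps: the Morse string drives the laser, and the consonant/vowel counts drive the oscillators.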
Step 4: The Morse code is output through the serial port as on/off (0/1) messages and sent to an Arduino with a laser module attached. The Morse code is then emitted as blinks of the laser beam.
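The expansion of a Morse string into the stream of 0/1 messages can be sketched as follows. The source does not specify unit durations, so standard Morse timing is assumed here (dot = 1 unit on, dash = 3 units on, 1 unit off between symbols, 3 between letters, 7 between words):

```python
def morse_to_bits(morse):
    """Expand a Morse string into a list of 0/1 states, one per time unit.
    Timing follows the Morse convention assumed above; on the Arduino side,
    each 1 would switch the laser on for one unit, each 0 switch it off."""
    bits = []
    for word in morse.split(' / '):
        for letter in word.split(' '):
            for sym in letter:
                bits.extend([1] * (1 if sym == '.' else 3))
                bits.append(0)        # 1-unit gap after each symbol
            bits.extend([0, 0])       # pad to a 3-unit gap between letters
        bits.extend([0, 0, 0, 0])     # pad to a 7-unit gap between words
    while bits and bits[-1] == 0:     # trim trailing silence
        bits.pop()
    return bits

morse_to_bits('.-')   # [1, 0, 1, 1, 1]
```

Each element of the list would then be written over serial at a fixed rate, with the Arduino toggling the laser module accordingly.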
Step 5: The consonant-to-vowel relationship yields a ratio that sets the frequencies of two sine-tone oscillators. These two frequencies are output as sound through a speaker.
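One plausible mapping from the counts to two oscillator frequencies is sketched below. The 220 Hz base frequency and the choice to hold one oscillator fixed while scaling the other by c/v are assumptions; the source only states that the ratio determines the frequency relationship:

```python
def oscillator_freqs(consonants, vowels, base=220.0):
    """Map the consonant:vowel ratio to two sine-oscillator frequencies.
    base is an assumed reference pitch (220 Hz = A3); one oscillator stays
    at the base while the other is scaled by the c:v ratio."""
    ratio = consonants / vowels if vowels else 1.0
    return base, base * ratio

oscillator_freqs(7, 3)  # (220.0, 513.33...)
```

In Max this would correspond to feeding the two values into a pair of cycle~ objects.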
Step 6: The speaker cone is covered with a membrane that has a small mirror at its center. The laser beam is aimed directly at the mirror, and the light is bounced off it onto the adjacent wall.
Step 7: The sound vibrates the membrane and the mirror attached to it, and the laser projects these vibrations at a larger, visible scale on the adjacent wall.
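The resulting trace is the classic Lissajous figure of the two oscillator frequencies. As an idealized model (treating the two tones as driving perpendicular deflections of the beam, with an assumed pi/2 phase offset), the projected point at time t can be computed as:

```python
import math

def lissajous_point(t, f1, f2, amp=1.0, phase=math.pi / 2):
    """Idealized position of the laser dot at time t: one frequency per axis.
    The phase offset and equal amplitudes are assumptions of this model,
    not measurements of the physical speaker-mirror setup."""
    x = amp * math.sin(2 * math.pi * f1 * t + phase)
    y = amp * math.sin(2 * math.pi * f2 * t)
    return x, y
```

Simple integer frequency ratios such as 2:3 close into stable figures, while the irregular ratios produced by real tweets trace denser, drifting paths over the exposure.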
Step 8: All light is blocked out of the room. On the opposite wall, where the laser is aimed, light-sensitive paper is hung to capture the laser's movement over a duration determined by the length of the tweet.
Step 9: The light-sensitive paper is removed from the wall and placed in the developer solution, then the stop bath, and finally the fixer, to reveal the path of the laser beam over time.