Contributed by Esteban Sarthou, Diana Delgado and Mariana Souza, co-creators of Fuentes Alternas.
Fuentes Alternas is an artistic intervention in public space. In a city square, eight loudspeakers are arranged in a circle, and from each one passersby can hear music that alludes to a particular emotion, together with a voice. This voice is generated from messages sent through Facebook and Twitter.
These messages are classified with MonkeyLearn into eight different emotions, and each emotion corresponds to one of the speakers. The interaction arises as passersby follow the circular path until they find their own message.
Squares and parks are places where people sit and chat, where children play, vendors do business, and tourists stroll.
Social networks allow us to keep in touch at all times with no restrictions. This freedom of expression generates a compulsive invasion of what is private over what is public.
By concentrating on the boundary between these spaces, Fuentes Alternas represents the current role of social networks and how they influence the modern concept of public space. We recover the historical value of squares and their original purpose as places of gathering and cultural exchange. It is this interaction that we compare to contemporary spaces of communication: the “traditional” public space is overlapped with the “virtual” public space.
We positioned social networks as new public spaces in which the passerby appropriates this space by expressing his or her thoughts, ideas, and moods.
Through this project, we seek to give voice to the messages and expressions people share on social networks, and to represent them according to their mood. It is a tool built on social networks that invites individuals to interact with the installation in the public space.
How does it work?
The centerpiece of this project is the circle of emotions. The eight emotions are arranged as if they formed a color wheel.
We wrote a simple script using the Twitter API to grab the latest tweets with the hashtag #FuentesAlternas.
For Facebook, we wrote a script that grabs the latest comments on the posts of our fan page using the Facebook API.
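The original scripts are not published, but the collection step can be sketched roughly as follows. Everything here is illustrative: the endpoints reflect the public Twitter search and Facebook Graph APIs, while the post ID, tokens, and helper names are placeholders, not the project's actual code.

```python
import json
import urllib.parse
import urllib.request

HASHTAG = "#FuentesAlternas"

def twitter_search_url(hashtag=HASHTAG):
    """Build a Twitter search request for the latest hashtag mentions
    (the v1.1 search endpoint, current at the time of the installation)."""
    params = urllib.parse.urlencode({"q": hashtag, "result_type": "recent"})
    return "https://api.twitter.com/1.1/search/tweets.json?" + params

def facebook_comments_url(post_id, access_token):
    """Build a Graph API request for the latest comments on a page post."""
    params = urllib.parse.urlencode({"access_token": access_token})
    return f"https://graph.facebook.com/{post_id}/comments?{params}"

def fetch_json(url, bearer_token=None):
    """Perform the request; both APIs answer with JSON."""
    req = urllib.request.Request(url)
    if bearer_token:
        req.add_header("Authorization", "Bearer " + bearer_token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Polling these two URLs on a timer and merging the resulting message lists would give the stream of texts that feeds the rest of the pipeline.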
The texts picked up from Facebook and Twitter needed to be classified into eight emotions. We created a text classifier using MonkeyLearn where every emotion was a category within the category tree.
Once we created our category tree, we focused on finding tweets that clearly corresponded to each emotion. We also made sure to include messages containing Chilean slang.
After training the machine learning model for the first time, we tested it and found the results far from satisfying. Given the subtle differences in meaning between some emotions (like fun and happiness) and the inherent difficulty of classifying short texts (like tweets), we realized that more samples were needed to improve the model.
Since it proved genuinely hard to find real tweets representing each emotion, we decided to write additional training samples ourselves. With the help of local Chilean students, we wrote down a great number of everyday expressions and phrases for the text classifier. The results improved considerably with this additional data.
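Once trained, the classifier is queried over MonkeyLearn's REST API. A minimal sketch of that call, assuming the current v3 classify endpoint; the model ID, API key, and sample response below are placeholders, not the project's real classifier:

```python
import json
import urllib.request

# MonkeyLearn v3 classify endpoint; {model_id} is the custom classifier's ID.
MONKEYLEARN_URL = "https://api.monkeylearn.com/v3/classifiers/{model_id}/classify/"

def build_classify_request(texts, model_id, api_key):
    """Prepare the POST request: a JSON body listing the texts to classify."""
    body = json.dumps({"data": texts}).encode("utf-8")
    return urllib.request.Request(
        MONKEYLEARN_URL.format(model_id=model_id),
        data=body,
        headers={
            "Authorization": "Token " + api_key,
            "Content-Type": "application/json",
        },
    )

def top_emotion(classification):
    """Pick the highest-confidence emotion from one classification result."""
    return max(classification, key=lambda c: c["confidence"])["tag_name"]

# Hypothetical response shape: each text gets a list of tag/confidence pairs.
sample = [{"tag_name": "happiness", "confidence": 0.82},
          {"tag_name": "fun", "confidence": 0.11}]
```

Sending the request with `urllib.request.urlopen(...)` and feeding each result through `top_emotion` yields one emotion label per message, which decides the speaker it will play from.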
Using AppleScript, included in Mac OS X, we created a simple script that automated the task of creating an .aiff audio file from each new text file. NSSpeechSynthesizer was used for the text-to-speech synthesis.
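The same automation can be sketched in Python instead of AppleScript: on Mac OS X, the built-in `say` command (a front end to the same speech synthesizer) renders text straight to an .aiff file via its `-o` flag. The file path and voice below are illustrative, not the project's actual settings.

```python
import subprocess

def say_command(text, output_path, voice=None):
    """Build the argv for rendering `text` to an .aiff file with `say`."""
    cmd = ["say", "-o", output_path]  # default output format is AIFF
    if voice:
        cmd += ["-v", voice]  # e.g. a Spanish voice for the installation
    cmd.append(text)
    return cmd

def synthesize(text, output_path):
    """Run the command (works only on a Mac with the `say` tool available)."""
    subprocess.run(say_command(text, output_path), check=True)
```

Calling `synthesize(message, "msg_001.aiff")` for each newly classified message would produce the audio files that the playback patch then picks up.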
Max/MSP is an audio-oriented visual programming environment. We used it to play the audio files together with the ambient music composed for each emotion. Each of the eight signals was routed through a different channel of the audio interface.
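The routing logic that the Max/MSP patch implements amounts to a fixed mapping from emotion label to output channel. A minimal sketch; the label names and their order are hypothetical, since the article only names "fun" and "happiness" explicitly:

```python
# Hypothetical circle of eight emotions, in speaker order around the square.
EMOTIONS = ["happiness", "fun", "surprise", "anger",
            "sadness", "fear", "calm", "love"]

def channel_for(emotion):
    """Return the 1-based audio-interface channel for an emotion label."""
    return EMOTIONS.index(emotion) + 1
```

With this mapping, a message classified as, say, the first emotion is mixed with that emotion's ambient music and sent to channel 1, and so on around the circle of speakers.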
During the three days the installation took place we observed various reactions from the public.
In most cases, passersby were instantly drawn in by the display and approached the speakers. At first, they would play around guessing which emotion corresponded to the music they were hearing.
Only after getting a real feel for all the different music did they try sending messages to our system. The messages were quite varied; a great number were friendly jokes among groups of spectators who were clearly familiar with each other. It was common to receive something like “Joe is really cute” and immediately hear a whole group of people laughing once the message was played.
Most people interacted with the exhibition as though it were a sort of intelligent toy and expected laughs from every message they sent. One passerby truly surprised us by sending pieces of poetry. Poems are charged with varied emotions, so these lines profoundly enriched the installation.
The classifier worked well, without major errors. Day after day, we reviewed what it had classified and incorporated new words and samples to further refine the system. We did notice that sarcasm was somewhat hard for the classifier to interpret correctly, as were certain sentences that referenced famous names.
Each message was repeated three times, giving the user enough time to walk over to the speaker where it was playing and hear it. This created a continuous flow of people strolling around the circle.
All in all, we achieved the goals we had set. Passersby quickly figured out on their own how to trigger the installation and had a good time experimenting with different messages. Fuentes Alternas successfully connected the virtual public space with the physical one.
Fuentes Alternas started in 2013 as a project created by students of the Audiovisual Engineering degree at the Universidad Católica del Uruguay.
In 2015, four of these students (Esteban Sarthou, Diana Delgado, Mariana Souza and Gonzalo Silvera) adapted the project to participate in the Bienal de Artes Mediales (media art biennale) in Santiago, Chile.