17 January 2008
The Media Lab had an event for our corporate sponsors in Tokyo, and we thought it would be a good opportunity to demonstrate how sensing technologies afford real-time feedback on behavior, specifically on one’s social network. Seventy participants (60 from the sponsor companies, 10 from the Media Lab) wore Sociometric badges during the event, which lasted all day on January 17. Roughly one third of the participants from each company wore badges, although if a company sent only one person, that person got a badge.
The badges recorded which other badges they recognized over IR (corresponding to a face-to-face interaction) and transmitted this information over a 2.4 GHz radio, through intermediate basestations, to a badge attached via USB to a computer. That computer wrote the information to a database, which was read by a social network visualization program (a modification of the GUESS system developed by Eytan Adar). The visualization program pulled interaction data from the database, added an edge to the social network diagram whenever a new interaction was detected, and simultaneously updated the layout using popular layout algorithms. The visualization itself was projected onto a large screen in the break/lunch room.
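The core of this pipeline, detect a pair over IR, collapse duplicate detections, and add one edge per newly seen pair, can be sketched in a few lines. This is a minimal illustration, not the actual system's code (which was a modified GUESS reading from a database); the function and variable names are invented, and badge IDs are made up.

```python
# Hypothetical sketch of the event's edge-updating step: IR detections arrive
# as (badge_a, badge_b, timestamp) records; each newly seen pair becomes one
# undirected edge in the social network diagram.
from collections import defaultdict

def update_graph(edges, seen_pairs, new_records):
    """Add an edge for each interaction pair not already in the graph."""
    added = []
    for a, b, _ts in new_records:
        pair = tuple(sorted((a, b)))  # treat a seeing b and b seeing a as one interaction
        if pair not in seen_pairs:
            seen_pairs.add(pair)
            edges[pair[0]].add(pair[1])
            edges[pair[1]].add(pair[0])
            added.append(pair)
    return added  # the new edges to render on the projected display

# Simulated polling batch: badge 47 and badge 12 each detected the other,
# so the duplicate record collapses into a single edge.
edges = defaultdict(set)
seen = set()
batch = [(12, 47, 1000), (47, 12, 1003), (12, 58, 1010)]
new_edges = update_graph(edges, seen, batch)
print(new_edges)
```

In the real setup this loop would poll the database on a short interval and hand the new edges to the layout engine, which is what kept the on-screen delay small.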
Naturally, all of this was done in real time, with only a small delay between an interaction being detected and its being rendered on the screen. It was really fantastic to see the data rendered that quickly, and crowds of people gathered around the screen (in some unfortunate cases blocking the projector) to see where they were on the visualization and how many people they had met. Many people came to me throughout the day exclaiming that the visualization “inspired [them] to network more and gave [them] a great appreciation for the value of an event like this.” Unique numbers, not names, were displayed on the visualization, so each participant could identify only themselves. Still, I noticed people pointing out each other’s nodes to colleagues and almost “keeping score.” Participants would check the visualization, go around and meet some other people, and then check again, comparing themselves with colleagues. It was all great fun.
Initially I had assumed that each company would form its own small cluster, with perhaps a few links interspersed between the groups. You can see screenshots of the SN diagram before lunch, after lunch, and after the last break (except for these breaks, all of the time was spent listening to lectures from Media Lab students and faculty). I’ll add pictures of the actual projected display and the setup as soon as I get them.
EDIT: Here's a picture of the visualization at the event:
Above: Visualization being discussed by me and Schlumberger managers
Above: SN Diagram before lunch
Above: SN Diagram after lunch
Above: SN Diagram after the last break
From almost the very beginning there was one giant component with a strong core-periphery structure, although the density of the component increased over the course of the event. It appears that there were two factors that led to this structure:
1. Media Lab participants, who all spoke with each other and with participants from many sponsor companies.
2. Research affiliates: members of sponsor companies who had also worked at the Media Lab as visiting researchers. These participants knew research affiliates from other companies who had been at the Media Lab at the same time, as well as the Media Lab participants.
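The two observations above, one giant component whose density grew over the event, are easy to check computationally. Below is a hedged sketch of that analysis in pure Python: a BFS to find connected components and the standard density formula for an undirected graph. The adjacency list is a toy example, not the event's actual data.

```python
# Sketch of the structural analysis: find connected components via BFS and
# compute graph density (fraction of possible undirected edges present).
from collections import deque

def components(nodes, adj):
    """Return the connected components of an undirected graph as sets of nodes."""
    unvisited, comps = set(nodes), []
    while unvisited:
        start = unvisited.pop()
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v in unvisited:
                    unvisited.discard(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def density(n_nodes, n_edges):
    """2m / (n(n-1)): fraction of possible undirected edges that exist."""
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

# Toy example: 5 badges, 4 interactions -> a single component spanning all 5.
adj = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3, 5}, 5: {4}}
giant = max(components(set(adj), adj), key=len)
print(len(giant), density(5, 4))
```

Run at each snapshot (before lunch, after lunch, after the last break), the size of the largest component would show when the network merged into one giant component, and the density values would quantify how much it filled in over the day.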
The research affiliates also ended up introducing participants to one another, and I believe this vividly demonstrated the kind of social capital that is generated through such an exchange program. In fact, Prof. Hiroshi Ishii, who organized the event, felt that this visualization could be presented to potential and current sponsors as an additional way to show the value of Media Lab sponsorship.
We are also going to analyze the data collected with the Sociometric badges to see if we can predict company affiliation, recognize research affiliates, and so on. We will also incorporate additional information into the visualization. I believe that this visualization was a success because of its simplicity, but if we add information such as accelerometer, speech, and proximity data, then participants may gain an even better understanding of what’s happening in their environment, as well as how they can interact with it.
Posted by Ben Waber at January 17, 2008 10:29 PM