In the first half of the day we learned how to connect our Circuit Playground devices to each other and send signals between them. We started by connecting everyone's devices together using P3 and the Mu editor. Then, when anyone touched their device, all of the other connected devices lit up in response. After that we split into groups of two, connected just our two devices together, and tested that out. It was really fun; I am really enjoying CircuitPython.
Then in the second part of the day we met with our groups and created a group photo. Instead of making one photo per location, my group decided to piece together a bunch of photos into a Frankenstein's monster. The background of the image is a collage of our three schools, and the foreground is a collage of our body parts: each arm from a different person, each leg from a different person, hips from one person, a torso from another, and then everyone's head on top. Here is the result. Credit goes to Yingzhou.
The second half of the day was spent working on concepts for our project and then presenting them. My group started by brainstorming the range of different types of greetings, then transitioned into the range of emotions that get lost in a video call. We discussed how hard it is to tell how anyone is feeling on a Zoom call because people barely react to what is going on. In person there are many signals for reading how someone feels, from facial expressions to reactions to body language, and you don't get much of that on Zoom. People usually stare blank-faced and stay silent, because talking requires the extra step of turning on your microphone. We want to create a way for people to gauge how others are feeling during a Zoom call, so we decided to use sensors to detect body movement during the call. The movement data would map to emotions; for example, shaking your foot violently indicates impatience. These movements would be translated into colors ranging from warm to cool depending on what the movement indicated. We would also use Processing to draw a shape that morphs from soft to sharp depending on the mood the sensor is conveying.
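To get a feel for the movement-to-color idea, here is a minimal Python sketch of how the mapping could work. This is just a placeholder concept, not anything we have built yet: the `movement_to_mood` function name, the specific thresholds, and the cool/warm color values are all my own assumptions.

```python
# Hypothetical sketch: map a movement-intensity reading to a
# warm-to-cool color plus a "sharpness" value for the Processing shape.
# All names, colors, and scaling here are placeholder assumptions.

def movement_to_mood(intensity):
    """Map normalized movement intensity (0.0 = sitting still,
    1.0 = very agitated, e.g. a violently shaking foot) to an
    (r, g, b) color and a shape sharpness (0.0 = soft blob,
    1.0 = spiky)."""
    intensity = max(0.0, min(1.0, intensity))  # clamp to [0, 1]
    cool = (50, 120, 255)   # calm: cool blue
    warm = (255, 80, 30)    # agitated: warm red-orange
    # Linearly blend each channel from cool toward warm.
    color = tuple(round(c + (w - c) * intensity) for c, w in zip(cool, warm))
    sharpness = intensity   # more movement -> sharper, spikier shape
    return color, sharpness

color, sharpness = movement_to_mood(0.2)
print(color, sharpness)  # a mostly-calm reading stays on the cool side
```

In the real version the intensity would come from the Circuit Playground's sensors, and the color and sharpness would drive the Processing visual; this sketch only shows the mapping step in between.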
After presenting this idea to everyone in the workshop we got some great feedback, and tomorrow we plan to alter the idea a bit so that it uses more than just visuals as output.