This is an interactive self portrait.
I am using NeoPixels to create a light matrix. While I did think about buying a pre-built flexible NeoPixel matrix, I realized that building it myself would not only be more economical ($20 vs. $100) but would also teach me more. Let the soldering begin!
I was given assistance by many of the “Shop Boys,” with one even lending me a handy NeoPixel holder. Since this was my first time soldering, it went a lot slower than I expected. I also started with a thin wire, thinking it would be easier to sew. However, it kept breaking off. So I decided I just needed to wire up a few test NeoPixels to see if they worked and did what I wanted, which meant using some basic wire.
Voilà! The NeoPixels actually work. Well, at least one works, but I am still pretty proud of myself. I created three more of these so that I could try to make a matrix but had to stop because my mom was in town. I was also given a copper breadboard at the last minute and am now thinking it would be better to build a stiffer matrix on that board first and get the images right, so I don’t sew everything and then have to undo it if it’s wrong.
Also, this past week has been a great time to discuss my project with people outside the class. I explained how I want to make a dress or a sweater with different sensors to express different emotions: crossing your arms will show an X, and putting your hands on your hips will show a lightning bolt. I discussed how I could either make these into images or make scrolling displays with words. I want something that isn’t completely direct, and I like the “fun” of images or emojis. This is why I will create a matrix with the Xs, lightning bolts, and lips, though I may also add in a “Brr.” I am finding through my discussions that I want this to be seen as fun but also to bring to light what body language is. My hope is that I can add enough sensors that users can explore their own body language and be surprised by it, as well as by their counterpart’s. I am still undecided on whether it should be a dress. I plan to make a dress, but I just don’t know if that form fits the idea. I hope to determine this in the next few days, as well as make a bigger matrix.
This past week was Spring Break, and since I already had a trip planned, I wasn’t able to do as much fabrication as I would have liked, and the little that I did do broke. However, this turned out to be a blessing.
I started working on the idea of mirroring gloves. They were supposed to capture hand gestures; based on those movements, the gloves would either encourage another user to match them or rate how well the other user matched. However, while testing the flex sensors in a glove, one of them broke. In my frustration, I started to think more about what I really wanted to get out of this class. Not only do I want to learn about wearables and their purpose, I also want a refresher on sewing and making soft sensors. Because of this, I have decided to alter my idea a bit once again.
I still want to explore body language, but I am going to take a broader approach and not stick with just mirroring hand gestures. In class, someone suggested making something that would show different types of body language, and this is where I plan to go. My idea is to make a shirt (although I would like to make a dress) that shows a different output based on where you place your arms. I will use a mixture of contact buttons, flex sensors, and possibly accelerometers to monitor these movements. As of right now, I think it will be a visual output with multiple LEDs in a pattern for each movement, but I also think there is something useful in sound.
I am much more excited about this and now have a plan for my outcome and even my documentation. Since I decided on this idea this afternoon, I don’t have much to show. I plan to make a fabric button tomorrow, and I am looking at patterns for how to make the shirt. I am also working on sketches for what it will be and filling out the past survey on what this will be about. I will add all of these later.
After all of the emotion and thought, I feel complete relief to finally see what I will be doing. I’m still nervous about how behind I am, but I think being out of the web of uncertainty is helping to motivate me to start.
Note: Also, I am very appreciative of everything that happened in class. Although it was completely terrifying and I felt emotionally naked, I am glad to have had that experience. It was eye-opening, not only about how I need to get better at sharing my thoughts but also about how supportive people can be.
Since KC and I have decided to go our separate ways on the project, I have been trying to brainstorm new ideas. However, I have found this to be a struggle, as my projects seem so similar to others’. I am also struggling with how much of a part technology needs to play. I know that wearables aren’t necessarily defined by their technology, but I also want to use this as a learning opportunity. With these being my main issues, I have started brainstorming some possibilities.
As previously discussed, I like the idea of using body language as a subject for this. I read through the Center for Nonverbal Studies and am still processing, as it was a lot of information. When I originally wanted to do body language, I wanted something that would decipher every part of it, but I now know that’s unrealistic. So I started thinking more about which parts I liked and found that I like the idea of eye contact. This brought me to the concept of glasses that light up your eyes when you are focused on another person wearing a similar pair. In my head, the lighting of your eyes would drown out other light, allowing you to focus on the other person’s lit-up eyes. Obviously this wouldn’t be a commercial product, but I find it an interesting idea to play with.
The other idea that came out of this research was the idea of waiting. I realized I have been thinking about waiting a lot but never really thought about what could be done with it until tonight. Waiting has such a profound effect on people that I would like to explore a project on it. However, I am unsure what type of wearable could address it.
Throughout this whole process there has been a lot of stress, so I am also thinking about the idea of a stress-relieving shirt. This could be explored by using different textures to grab onto or relax with, almost like bubble wrap for your body.
The next idea I came up with was reinventing friendship necklaces. This would explore what a friendship necklace is meant to be. I could do the basic version where they light up when put together, but I think I would want more than that: perhaps sensing the other person’s presence, or giving you a sense of their mood (though I’m not sure how this would be possible).
I know that these are all thoughts in different directions, and I am scared about how behind I am on prototyping. I learned a lot from making the basic circuit for the project with KC, but starting back at square one this week makes me scared of not finishing this project. I seem to have an issue with deciding on a project I think is actually a good idea, and I second-guess everything. I plan to present these in class this week.
Note: This project is a collaboration with KC Lathrop.
Wearable Tech Project:
1. What is the purpose of the artifact you are designing?
The purpose is to encourage body movement through pleasant musical feedback. If people are incentivized by music, will they move more? This artifact will help reframe the perception of our bodies as tools to create what we hear. Listening to music is typically a passive experience, with dancing as a result of the music. We are going to explore how one’s body moves if the movement, or “dance,” comes first.
2. Why does it exist?
Music is powerful in many ways. It can excite us or calm us down. It can even help us remember, according to Michael Rossato-Bennett’s film “Alive Inside,” which documents how music improves the lives of people with dementia. This wearable would help music become more accessible, as it would essentially be a part of your body. Another important element of this object is that it would encourage people to move. With the evolution of technology, people are more sedentary than ever. According to James A. Levine, an obesity specialist at the Mayo Clinic, “It’s the disease of our time. Any extended sitting can be harmful” (www.lifespanfitness.com).
3. What will it DO? (How will it do it – but you can only answer this if you are clear about the rest of the answers)
As a person moves it will play parts of a song, or notes from a song, enabling a person to interact and recreate music with their bodies. It will have flex sensors in the shoulders of the shirt, which will be mapped to different samples of a song or notes. Each movement will have a corresponding sound.
4. How does it work? Step-by-step – (you open a box, a drawer, you plug it in, you charge it, you press on a button to activate it or it is always on… etc)
1. Put on the shirt.
2. Connect your phone to the shirt via BLE using the app to activate the connection.
3. Move and listen for music feedback
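As a rough sketch of the mapping described above (each flex-sensor reading selecting a sample or note), here is a minimal Python illustration. The analog range of 300 to 700 and the sample names are hypothetical placeholders, not values from our actual build:

```python
# Hypothetical sketch: map a raw flex-sensor reading to one of a few
# song samples. The sensor range and file names are assumptions.
SAMPLES = ["verse.wav", "chorus.wav", "bridge.wav"]

def sample_for_reading(reading, low=300, high=700):
    """Map a raw analog reading onto one of the loaded samples."""
    # Clamp the reading into the calibrated range.
    reading = max(low, min(high, reading))
    # Scale into a sample index (rounded to the nearest slot).
    span = high - low
    index = int((reading - low) / span * (len(SAMPLES) - 1) + 0.5)
    return SAMPLES[index]
```

In the real shirt this decision would happen on the microcontroller or in the phone app, but the thresholding logic would look much the same.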
5. Why would someone want to use it? What do you add to their life? Remember that value is shared, applied based on some sort of value system onto objects. So think about communication, and shared values.
Humans love music. It can be personalized to fit their emotions, taste, or activity. We listen alone through our headphones or at large events with amplified sound. When we listen to music we tend to move, but not everyone is comfortable with dancing or using their body to express their feelings. This wearable would combine an interactive experience with music, in which a person controls what they’re hearing, pairing the natural impulse to move with encouragement to use the body as a tool to personalize the musical experience. It can be a shared experience as well, either as a performance or with two people creating music together.
6. What is your anchor?
The key idea is that we love music and need to move more. With this wearable, the movement controls the music, rather than the other way around. It gives people the chance to express how they want to interact with the music they hear through their bodies.
7. Describe in 1 paragraph your project.
We will be creating a shirt that has flex or bend sensors in the shoulders. In possible later iterations, we will have sensors in the elbows and wrists. The sensors will be mapped to sounds of music programmed on a mobile device via BLE. Our main demographic is people interested in increasing their awareness of their body movements and in encouraging themselves to move more.
Readings: In reading Wearable Electronics and Smart Textiles: A Critical Review, I found myself excited about my engineering background once again. I have always had a love for learning about new materials, and this reminded me of that. Being able to understand how the textiles work is essential to understanding how to use them, and although I have never really worked with wearables, it helped me see possibilities for future projects.
I really enjoyed reading Losing the Thread because it had a lot of information I had never thought about. I never knew how large a role textiles play in our lives and society. It gave me an appreciation of and gratitude toward them. I especially enjoyed learning about the metaphors and, at the beginning, about what people thought of future fashion. I would love to see this experiment done now for fun, with the hope that there would be the same openness to the future.
Tracking of Garment: My favorite garment is a BCBG Max Azria dress that I bought about 8 years ago. It is made of a fabric blend that is 93% Rayon and 7% Spandex, giving it a nice silky, cool-to-the-touch feel (which is one of my favorite parts of the dress). The pattern is a black base with what almost look like geometric shapes ripped from teal, cobalt, and mustard paper rather than cut straight.
While inspecting the dress, I found it was made up of 10 parts: 2 sleeves, 2 cuffs, 2 tops, 1 top backing, 1 frontal skirt, 1 back skirt, and a belt. The fabric around the belt and the cuffs appears to be rolled in to make a nice edging.
At the shoulder and the waist, there is pleating, as seen below, that gives it a nice, flattering shape for my body.
Although I hardly get to wear this dress, it will always be my favorite and stay with me in my closet. I like that the silhouette is form-flattering, with almost a 1930s gown style to it, but adds a modern twist with the pattern. It hits my waist nicely and flares out just enough so that it is not too tight. It is also long enough that I can wear it to different types of occasions. I hope to find the pattern for this one day, or possibly recreate this dress using the same type of material but in different colors and patterns.
My mood/inspiration board is split into three categories. The first (on the left-hand side) is a column of inspiration related to biomimicry. This is a topic I studied in undergrad and would love to find a use for within ITP. Not only is there new material inspired by nature coming out every day, but there are also processes and tools that come from biology. I also like the aesthetics of this column, including the patterns and colors. It’s a mix of harsh and soft beauty.
Jumping to the middle column, there is a mix of textiles and patterns that I found. Some are e-textiles, like the purple scarf dyed with soil bacteria. I am interested in working with different types of textiles and materials for this project.
Jumping to the right column is inspiration related to social experiences, with an emphasis on taking mundane, everyday things and making them interesting. I am using Disney because, no matter what people’s feelings are toward the company, I believe they excel in this category. The two examples I use are their queueing and their MagicBands. In their queue areas, they are able to positively manipulate (or encourage) you to wait possibly hours for a few minutes of fun. They make waiting in line a whole experience by adding games or secret objects. They also create checkpoints that make you feel that you’re almooost there. While most amusement parks have long lines regardless, these details are what make it a better experience.
In the second part, I have the MagicBands. What I like about these is that they once again make boring tasks like checking in or paying for something easy. While this is a little more commercialized than I would like, it is still an example of taking the mundane and making it fun. I hope to do that in this wearables project.
Currently, I do not have a clear idea in mind. I want to find a nice mix, and my next step will be to look at AskNature.org and other biomimicry websites for that part of it, while sending out a survey to find out what everyday things people have small pet peeves about. I plan to focus on making everyday, boring things more interesting in a subtle way and will use the materials and science as a helper.
The Fort is an immersive experience with the intention of encouraging user connection. The idea comes from Skylar Jessen who I partnered with for this project. I assisted with creating the code to help make his idea come to life.
The Fort uses the CLM Trackr to read each user’s facial expression. Each individual’s happiness level sets the base color of the bottom part of the gradient, while the top part is driven by the volume of the two users’ vocal interaction. A third element, surprise, was added and is tied to the opacity, so it tints the image.
All of this will be projected on a fort to create an intimate experience and encourage the users to focus on one another rather than the technology.
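A minimal Python sketch of this mapping, with all inputs assumed to be normalized to 0..1 (the actual project used CLM Trackr in the browser, so this only illustrates the logic, not the real code):

```python
def fort_gradient(happiness, volume, surprise):
    """Illustrative mapping: happiness -> bottom gradient color,
    voice volume -> top gradient brightness, surprise -> opacity tint.
    All inputs are assumed normalized to the range 0..1."""
    clamp = lambda v: max(0.0, min(1.0, v))
    happiness, volume, surprise = clamp(happiness), clamp(volume), clamp(surprise)
    # Happier faces shift the bottom color toward warm red (an assumed palette).
    bottom_rgb = (int(255 * happiness), int(255 * (1 - happiness)), 128)
    # Louder conversation brightens the top of the gradient.
    top_rgb = (int(255 * volume),) * 3
    # More surprise lowers opacity, tinting the projected image.
    alpha = int(255 * (1 - surprise))
    return bottom_rgb, top_rgb, alpha
```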
SandBand started out as a simple project to encourage people to play like they did as children. I wanted something that would be simple and not require much knowledge to use. Sand seemed like the simple solution, as it’s something most people have come in contact with. Although SandBand ended up working, with some tweaks still needed for the sounds, it was not the easiest journey.
During brainstorming, the concept came quickly and simply. The project was going to be a simple sandbox with a sensor detecting when the user moved the sand.
However, the problems started soon after that. It’s something that a few of us like to call a “dark day,” or in my case a “dark week”: every idea you currently have seems awful, and then the feeling can start slipping into every idea you have ever had. It makes you question everything and just feel lousy. During this time, I started to wonder what it was I really wanted to make and why. I was about to scrap the whole idea but couldn’t get it out of my mind. I was given advice by lots of people, but some of the best came from a friend who simply said to just do it to get it out of my head. Because SandBand had become a mental block and I couldn’t move forward with any other ideas, I had to finish the project to see what it could be.
Another obstacle during this time was combining this project with another project that used pine needles to create an image. At first, this seemed like a great idea, since both projects used nature for the material interaction, and we did the first play-testing as a combined project. During the testing, I found that most of the comments were about one project or the other and not about the whole. This, combined with my “dark days,” made me second-guess the collaboration. In the end, I decided it wouldn’t be fair to my teammates to hold them back in their project while I decided what to do with mine. While in retrospect I can see how combining the projects could have been great, I still think it worked out for both projects individually.
Once the decision to move on with the sandbox as-is was made, I had to create the concept for how it would be completed. Originally, I wanted to create a box with sensors the users wouldn’t be able to see. In my mind, this would encourage users to explore the project without immediately seeing how it works. I thought I could do this with weight sensors at the bottom of the box. The box would be separated into sections so that each section would have a separate sound and could be triggered independently.
I started by researching how to create force sensors. Through my research, I found that weight sensors aren’t easy to create, but I could use FSR (force-sensitive resistor) sensors, and I found many tutorials on how to make them. However, I still found myself not wanting to build them, and after meeting with T.K., he suggested using the Kinect. I struggled with this idea at first, since it seemed like it would take away from just playing with sand, but he pointed out that with FSRs, the user would have to add and take away sand for it to work. Otherwise, the user would never be able to create the full range of sounds. With the Kinect, the user could just move the sand or build a castle without changing the amount of sand. This solved the sensor problem.
At this time, the second round of play-testing occurred. For it, I set up a box with regular sand and a box of kinetic sand. Kinetic sand seemed like a suitable replacement for wet sand, since the idea was for users to build sand castles, but for play-testing it became a problem. Most people were so distracted by the sand that it was hard for them to focus on what the project would do. To be fair, this was also due to SandBand not being fully set up. While at first it was frustrating not to hear my classmates’ opinions on the full project, I used this as a learning opportunity. It taught me that SandBand can be fun whether or not the music part works, because in the end, most people just want to play with sand. Even later on, when I made the actual sandbox and filled it, I got a lot of random requests from people to just place their hands in it. While most people still wanted me to use kinetic sand, it was far too expensive for the amount I needed.
The other part of the play-testing was about how to create the music. Most recommendations were to create a way for users to actually make music, which validated my original idea. However, as I started working with the Kinect, I learned it wasn’t as easy as I had thought.
The first issue was setting up the Kinect. During my research, I found the SARndbox from UC Davis, which uses the Kinect to create a projected image based on depth. I assumed I could adapt its code to change music instead, but as I began to download the provided software and code, I kept running into dead ends. The biggest was that the software only worked on older Mac OS versions. I decided there might be a better way to do this than using software I didn’t understand.
As it happened, Daniel Shiffman had examples of using the Kinect with Processing. I began using some of these examples as a starting point, and things became much easier. I could finally get the Kinect to give me information. It was at this point that I ran into another problem: I could not wrap my mind around how to create a grid, get the average depth of each cell, and then connect those cells to specific sounds.
The first step was creating the grid. I met with Shiffman, and he taught me to do this with four nested for loops: the outer two loops split the area into different sections, and the inner two loops analyze the pixels within each section. Once that was complete, I could get the average depth. I understood how to get the average depth of the whole frame from the example shown, but doing it per section didn’t make sense to me. At this point, I had another office hour, with Moon, to help me understand what to do. He explained that I could move the summing code into the loop to get the average in each area. Once that was figured out, it was easy to connect each cell’s average depth to the volume. There are some issues with the Processing Sound library: depending on the songs I picked, Processing ended up crashing. After discussing this with a few of my classmates and Shiffman, I learned that this can just happen sometimes with the sound library.
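For anyone curious, the four-loop structure can be sketched in Python (the real sketch was in Processing; here the Kinect's depth frame is simulated as a flat list, the way the Processing Kinect library exposes it):

```python
def cell_average_depths(depth, width, height, cols, rows):
    """Average the depth pixels inside each cell of a cols x rows grid.

    `depth` is a flat list of width*height depth readings, as the
    Kinect library provides. Returns a cols x rows list of averages.
    """
    cell_w = width // cols
    cell_h = height // rows
    averages = [[0.0] * rows for _ in range(cols)]
    for i in range(cols):                  # outer loops: walk the grid cells
        for j in range(rows):
            total = 0
            for x in range(i * cell_w, (i + 1) * cell_w):      # inner loops:
                for y in range(j * cell_h, (j + 1) * cell_h):  # pixels in a cell
                    total += depth[x + y * width]
            averages[i][j] = total / (cell_w * cell_h)
    return averages
```

Moving the summing inside the outer loops, as Moon suggested, is exactly what makes the average per-section rather than per-frame.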
At this point, I had code that was somewhat functioning, and I needed to start focusing on building the sandbox. Through this process I learned how to cut the wood, use the router, and measure cuts correctly. The box ended up taking a little longer than I thought, but in the end I was happy with the results.
Once the seal had dried, I had people help bring in the sand. While I thought it would take somewhere between 50 and 100 lbs, I ended up having to get 180 lbs of sand. Although sand is cheap, it is not easy to transport, which is a huge issue I wish I had thought about before starting this project. At least the effort was worth it: many people wanted to play with it or were intrigued right away.
At this point, I was ready to connect the Kinect with the sandbox. Although it worked, it wasn’t exactly the result I wanted. Luckily, one of my classmates, Aaron Montoya, is an expert in sound and helped me understand how to use it better for SandBand. He first showed me that I could move the loop function above the play function to loop the sounds. He also showed me that perceived volume works in an exponential way rather than a linear one, and to code it as such. The final part was using Audacity to easily shorten the music. Thanks to Aaron, Moon, Shiffman, and T.K., I was able to create functioning code (linked here) that allowed the depth to affect the volume of the sounds. For the final presentation, I used Aaron Montoya’s and Naoki Ishizuka’s music. In the end, I know I still have some tweaks to make for the show, like changing the actual music, but I am still happy with the results of SandBand version 1.
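Aaron's point about exponential volume can be sketched like this; the exponent of 4.0 is an arbitrary choice for illustration, not the value we actually used:

```python
def depth_to_volume(depth_norm, curve=4.0):
    """Map a normalized depth reading (0..1) to a playback amplitude.

    A linear map (amplitude = depth_norm) sounds wrong because human
    loudness perception is roughly logarithmic; raising the input to a
    power gives the exponential-feeling response Aaron suggested.
    The curve value of 4.0 is an assumed constant for illustration.
    """
    depth_norm = max(0.0, min(1.0, depth_norm))
    return depth_norm ** curve
```

With this curve, mid-range depths produce much quieter sound than a straight line would, so the volume seems to "swell" as a pile grows tall.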
Recently I went to the TED Talk Live about science and wonder, where Tierney Thys discussed how increasing our interaction with nature can improve our lives. This encouraged me to stick with the sandbox idea. There is something to be said for getting people to actually feel and move the sand with their hands, and I like the tactile experience I hope to create.
The sandbox will measure approximately 3′ x 3′ x 8″. I am still unsure whether I will use sensors or the Kinect. I found this Kinect example from UC Davis. It uses the depth data to project different colors, like a topographical map. They also provided instructions on how they created their project.
Although I would like to learn how to use the Kinect for depth analysis, I think the effect might feel more magical if pressure sensors are used underneath the sand, set up in a grid pattern. However, I am worried that the movement won’t be captured easily or accurately.
With both of these, I am still unsure how to make a nice, combined sound rather than just a bunch of noise. My idea is either to run a loop that hits each sensor in sequence and plays sounds in that order, or to play whichever sound belongs to the tallest/heaviest section. I think the latter could be the most effective for making a fluid sound, but I am not completely sure of the logic.
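The "tallest section wins" option might look something like this in Python; the heights list and the min_height threshold are hypothetical, since I haven't built this yet:

```python
def loudest_section(heights, min_height=0.0):
    """Pick which section's sound should play: the tallest sand pile wins.

    `heights` holds one (hypothetical) sand-height value per grid section.
    Returns the index of the winning section, or None when no pile
    rises above `min_height` (i.e. the box is effectively flat).
    """
    best = None
    for i, h in enumerate(heights):
        if h > min_height and (best is None or h > heights[best]):
            best = i
    return best
```

One appeal of this rule is that only one sound plays at a time, which might keep the output from turning into noise; the downside is that small piles never get heard.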
For now, I have started the MIDI lab on the p-comp website but stopped when I talked to one of the residents about using the MIDI device. I have plans to meet up with a resident to discuss this in further detail. I also plan to make the box for the sensors this weekend. If I can’t make a nice box, I plan to buy a turtle sandbox like this:
Sand Band (working title) is a musical sandbox. The box will be divided into different sections, each with a corresponding musical instrument. As each user makes a shape or “castle” in the sandbox, the weight of the sand in each section will trigger the note that the instrument plays. The sandbox will be big enough for at least two users to play at a time. It is undetermined whether users will initially be aware that they can create sound with their sand castle or whether that will emerge as they work.
An additional possibility is to connect it with Rebecca’s and Kevin’s P-Comp finals. Rebecca’s project creates a visualization based on the user painting/moving pine needles on a canvas. Kevin’s is a water-based project that would interact with both of ours. At this time, it isn’t determined what the effects would be.
As I am writing this, I am becoming less and less sure that I actually want to do Sand Band. I will decide after play testing.