In October 2017, I attended the Processing Community Day held at the MIT Media Lab in Cambridge, Massachusetts. This gathering was the first of its kind, bringing together creative coders from across the country to share projects and experiences built with the popular programming language Processing. I have used Processing both in my creative work and in teaching CSCI 111Q.
Taeyoon Choi and the other organizers of this event placed a heavy emphasis on networking, and from my perspective, it was a success. Attending this event allowed me to meet face to face several individuals I had previously only conversed with online, as well as get to know new artists and educators using this programming language. It helped me lay the groundwork for new developments in my teaching and creative practice.
As part of my work for the Jamoma team in early 2015, I completed a 64-bit Mac update of Ville Pulkki's 2006 external for vector base amplitude panning (VBAP). This enabled other members of the team to use modules that depended on the external with newer versions of Cycling ’74 Max. After seeing my forum posts about this, a Windows user recently contacted me about getting an update for that platform too. After first getting permission from Ville, I set up a new GitHub repository with continuous integration so that changes are automatically compiled.
A few people are still collaborating on documentation updates to flesh out the package, but the externals are ready for use at the repository link. If you download the updates and use them successfully, let me know. Also, if you know of other stray 32-bit Max objects out there that need a similar update, please contact me.
vbap works with matrix~ to pan sounds across multichannel setups.
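For anyone curious about what vbap is doing under the hood, here is a minimal, hedged sketch of the two-dimensional case in C++: given a pair of loudspeaker angles and a desired source angle, the gains come from inverting the loudspeaker base matrix. The angles below are made-up example values, and this is not code from Ville Pulkki's external, just the basic math.

```cpp
// Minimal 2-D VBAP sketch: derive gains for one loudspeaker pair (illustrative only).
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double deg = kPi / 180.0;

    // Hypothetical loudspeaker pair at +30 and -30 degrees, source panned to +10 degrees.
    double spkr1 = 30.0 * deg, spkr2 = -30.0 * deg, source = 10.0 * deg;

    // Unit vectors pointing toward each loudspeaker and toward the virtual source.
    double l1x = std::cos(spkr1), l1y = std::sin(spkr1);
    double l2x = std::cos(spkr2), l2y = std::sin(spkr2);
    double px  = std::cos(source), py = std::sin(source);

    // Solve for gains g1, g2 such that g1*l1 + g2*l2 points toward the source
    // (Cramer's rule on the 2x2 loudspeaker base matrix).
    double det = l1x * l2y - l1y * l2x;
    double g1  = (px * l2y - py * l2x) / det;
    double g2  = (py * l1x - px * l1y) / det;

    // Normalize for constant power, so loudness stays steady as the source moves.
    double norm = std::sqrt(g1 * g1 + g2 * g2);
    g1 /= norm;
    g2 /= norm;

    std::printf("gains: speaker 1 = %f, speaker 2 = %f\n", g1, g2);
    return 0;
}
```

With more loudspeakers, the external simply chooses the pair (or triplet, in 3-D) that surrounds the source direction and applies the same calculation, which is what makes it scale to arbitrary multichannel setups.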
This year, the second semester of my Computer Music sequence (DIGA 462) was structured more as a peer working group. Students were responsible for developing two large-scale projects that fell within several pre-defined categories and bringing their work-in-progress to class each week for comments and critique. I worked on projects too and brought them in for the students to critique. The student response to this format was very positive! For those who are interested, you can review the syllabus here.
In the second half of the term, we assembled our work into a final concert held off campus at Collective Church. The concert turned into a wonderfully authentic experience for the students. They had to develop an order for the concert program, gather the required gear, plan the marketing, and run the event. Documentation for this event is spread across several social media platforms, and links to these, along with a full-length video from the concert, can be found online at digahertz.com.
The largest project for me in 2016 was another chance to collaborate with composer Virgil Moorefield. As in 2014, I worked for months in advance of the proposed performance dates to achieve some of the effects Virgil wanted. Most of this preliminary work centered on programming a Microsoft Kinect gaming system so that it could be used in his performance as a gestural controller for visuals. This required many hours of research, programming, and testing, but the results were very successful.
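To give a sense of the kind of programming involved (without reproducing the actual performance patch or the Kinect SDK), here is a rough C++ sketch of the general idea: take a tracked joint position and scale it into a normalized control value that a visual parameter can follow. The JointPosition struct, ranges, and parameter name are hypothetical placeholders, not Virgil's system.

```cpp
// Illustrative sketch only: scale a tracked joint position into a 0..1 control value.
// JointPosition is a hypothetical stand-in for whatever the skeleton-tracking layer reports;
// this is not the performance patch and not the Kinect SDK.
#include <algorithm>
#include <cstdio>

struct JointPosition {
    float x, y, z;  // meters, in the sensor's coordinate space
};

// Map a raw coordinate from [low, high] onto the 0..1 range a visual parameter expects.
float normalizeToControl(float value, float low, float high) {
    float scaled = (value - low) / (high - low);
    return std::max(0.0f, std::min(1.0f, scaled));
}

int main() {
    // Pretend the tracking layer just reported the performer's right hand here.
    JointPosition rightHand {0.25f, 1.40f, 2.10f};

    // Use hand height (roughly 0.8 m to 2.0 m above the floor) to drive the brightness
    // of a visual layer; in performance this value would be updated on every frame.
    float brightness = normalizeToControl(rightHand.y, 0.8f, 2.0f);
    std::printf("visual brightness control: %.2f\n", brightness);
    return 0;
}
```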
For the November performance in Zürich, Switzerland, I arrived about a week in advance so that I had several days to focus on final adjustments and getting the overall technical setup ready for the show. Here is a two-minute video of me doing a final systems check of the Kinect controller:
It is important to note that Virgil assembles a great team of people to perform for his concerts. The individuals who make up this team are seasoned and work at a high level of professionalism, which makes for a very stimulating work environment. Other members of the technical team and I also supplied the physical labor to move all the necessary gear from Virgil’s studio to the venue. This two-minute time-lapse gives you a glimpse of how much work was involved:
My role during the concert was as a visual performer, responsible for the computer handling visual effects and for cueing transitions throughout the performance. Moorefield is also great about documenting his concerts, and the video produced from this event is available online here:
In a recent update to Max, Cycling ’74 introduced the new Package Manager as an easier way for users to download third-party add-ons to their local machine. For the last few weeks, I have been consulting with Tim Place and Andrew Benson to make some improvements and additions to my own package on GitHub (including the addition of Windows support!), so that it could be included in this handy new feature. Now that this work is finished, I am honored to say that the LowkeyNW Max Package is now part of this “hand-picked selection of the finest Max add-ons”. If you are a Max 7 user and download the LowkeyNW package, please let me know what you think!
This spring semester, I once again led our Digital Arts juniors in a service learning project at the local Boys & Girls Club. The Stetson students used Processing to design a game that introduces key programming concepts to the kids at the club. In addition, they were responsible for documenting their work by creating a website.
Website created by students to document the project.
Very proud of the work these students did! In addition to their website, you can learn more about the project from the following:
It was hard to take over this year’s DIGA 361: Audio Recording & Production course after the loss of my colleague Ethan Greene. He is still sorely missed by me and the students, who were all looking forward to working with him in the sound studio this term. In many ways, these are his students; I just took care of them for a few weeks.
Together, we pressed on to learn about the sound studio and completed 13 final projects that make up another fine MP3 mixtape. The 2015 edition features witch house, break-up songs, drinking songs, holiday medleys, original soundtracks, and so much more.
You can download the collection of final projects at the link below and easily import them into your iTunes library. Once you listen, please let us know what you think by commenting on this post or sending us a message on Twitter. Enjoy!
The palm court is one of the most photographed locations on Stetson University’s DeLand campus. Pictures of it can be seen in brochures and on the website. Visitors pause by the fountain and try to frame the trees just right. New graduates rush to take one last photo with their friends after being dismissed from commencement. It’s one of those locations that seems to have a pull on people, demanding that they take out their cameras and snap a photograph. This visual compulsion led me to question how I could respond with my own sonic action. I have always been interested in the ability of headphones to transport the listener to a new space and convey perspective sonically, so I conceived this piece for headphones.
Between November 2014 and April 2015, I embarked on the task of recording three minutes of audio at each one of the 120 trees on the palm court. This resulted in over six hours of material that was then stitched together into five compositions corresponding to five distinct sections of the palm court. The quick edits require listeners to constantly reorient their position within the palm court, a task that draws attention to the fountain’s sonic presence. This juxtaposition also reveals recurring sounds that add to the character of the palm court, such as the squeak of doors opening, the drone of planes flying overhead, and the whirr of lawn mowers on the ground. The longer segments let the drama of certain ephemeral events unfold, like people moving past the microphone’s fixed position and conversations captured that were never meant to be overheard.
Throughout the process of creating this piece, people repeatedly asked me, “What do the trees say?” I always gently corrected them by responding, “It’s more about what they hear.” My hope is that, after spending some time with this installation, listeners can better appreciate the soundscape these silent witnesses inhabit.
During the first week in August, I spent the majority of my time installing every tree in the Hand Art Center. Below is a selection of images that I posted to social media to share the installation process with others along the way. Many thanks to the HAC staff for their assistance with the painting. This project will be open to the public on August 14.
One of the highlights of my work on Jamoma has been the workshops. Not only are they opportunities to visit interesting places, but they also provide time and space to focus my efforts on coding with a set of truly remarkable collaborators. The workshops are unique in the way they get everyone involved in Jamoma energized about the project and propel things forward.
During our time together, we made some bold decisions about the future of the project that I hope will improve it in significant ways. They include:
Instead of 0.6-alpha, the next Jamoma release will be our 1.0-beta. There was a consensus that Jamoma has been in alpha long enough and now has a stable set of features. Moving to beta and then release will hopefully give people outside the development team more confidence in what we know to be a valuable tool for artistic production.
The C++ core will undergo a major refactor called jamoma2, with the aim of transitioning to a header-only library. This should provide a unique solution for audio processing and make the Jamoma Core easier to incorporate into plug-ins or apps; a rough idea of the pattern is sketched below.
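To give a sense of why that design appeals to us, here is a tiny, hypothetical sketch of a header-only gain processor. It is not the actual jamoma2 interface, just the general shape of a class that lives entirely in one header and can be dropped into a plug-in or app.

```cpp
// gain.hpp -- hypothetical header-only processor, not the real jamoma2 API.
// Everything is defined in this single header; a project uses it by #including the file,
// with no separate library to build or link.
#pragma once
#include <vector>

namespace sketch {

class Gain {
public:
    void setGain(double g) { mGain = g; }

    // Multiply one block of samples in place by the current gain.
    void process(std::vector<double>& block) {
        for (auto& sample : block)
            sample *= mGain;
    }

private:
    double mGain {1.0};
};

} // namespace sketch
```

Keeping the implementation entirely in headers like this is part of what should make the core straightforward to embed in other projects.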
My thanks to Tim Place, Trond Lossius, Jan Schacher and Max Mustermann for making the trip and being willing to think big! The months ahead look to be pretty busy as we work to follow through on these initiatives together.
Watching a Delta rocket launch during a break at the beach.