Sound Engineer Research – Tom Coyne

Tom Coyne is a mastering engineer who is highly respected in the music industry, widely regarded as one of, if not the, best mastering engineers there is. He has been active for over 40 years and specializes in R&B, Hip Hop and Pop. He started his career at Frankford/Wayne Recording Labs in 1978, and his early career revolved around mastering dance music.

http://sterling-sound.com/engineers/tom-coyne/#biography

http://research.omicsgroup.org/index.php/Tom_Coyne_(music_engineer)

https://www.discogs.com/artist/272899-Tom-Coyne

Unit 11 – Final Project Proposal

Initial Ideas

Group LP
One of my initial ideas was to create a group LP with Ryan, Levi, Josh and Caesar. The concept behind the album would be to create music that does not conform (to an extent) to the stereotypical sounds of the genres our individual backgrounds are in. The LP would be a crossover of all of our genres and sounds. Whatever music we ended up making would almost certainly stand out, as we would each bring our own variety of influences to every track. The project would serve as a foundation of influences for our future individual projects, allowing us to develop our own sound around the people and places we have actually experienced.

Solo EP
Another initial idea was to create and release an EP. My lecturers advised strongly against this because students making EPs tended to only make tracks, with no promotion, marketing strategy or even any intention of releasing the music. Despite this, a solo EP is still under consideration for me: my production skills are approaching releasable quality and I intend to release music in my own time in the very near future anyway, so I would definitely have a marketing strategy as well as a great deal of primary research. I have already received entirely positive feedback from around 10-15 small artists in the same genre as myself, though some of those responses did include constructive criticism, mostly regarding how my bass lines could include more elements to give my tracks greater dynamics.

Proposal

My project will involve creating sound and music for a film which I have been contracted to work on outside of college. I decided to use this as the basis for my final project because it requires me to learn new skills and techniques through trial and error, which will inevitably give me experience to reflect on as I grow my craft as an artist, engineer and musician. This topic is also convenient because it means I can streamline my college work and my personal projects into one, making efficient use of the time I have and giving me a workflow that feels more natural.

Early Stages

The early stages of my work began in around October 2016. The project commenced with a private meeting with the screenwriter, where he spent several hours briefing me on the concepts behind the film and the planning and work he had done so far. He showed me storyboards, drawn screenshots and concept artwork, and described the characters to me. The idea of this meeting was to convey the 'spirit' of the film to me, so that whatever music I created would blend in with the film.

Project Concept

In initial talks about this film and its soundtrack, the screenwriter sent me a playlist he had created, featuring a variety of music that he believed the film should be influenced by. Unfortunately, most of the music he showed me did not really resonate with me or my style, as most of it was Indie-pop/rock. Some tracks did stand out to me: the more jazzy and hip-hop-esque ones, which served as a middle ground between us. I asked him about any creative limitations on this film, to see what I could and could not do, and he answered that he didn't want it to sound 'electronic'. This may make it look like I was completely the wrong person for the job, though ironically most of the tracks he showed me were electronically produced. Because of this, I decided to share with him some music that I thought he would like based on what he had sent me, and it turned out that he liked almost all of it and thought its style would suit the film. The music I sent him was all electronically produced. From what I can make out from the music he sent me and what he likes, by "don't make anything electronic" he really meant not to use obvious electronic sounds such as 808s or 909s.

I also figured that the styles he likes are often breakbeat-driven and Lo-Fi, which may be why he did not consider the tracks he showed me electronic, even though most of them used synthesizers. Thankfully, I definitely like working with breakbeats and I like mixdowns that aren't overly clean, so I still suit this project.

After this meeting, I decided to show the screenwriter some pre-existing tracks by other artists who have influenced me and which I believe suit the theme of the film, in order to find some common ground with him. Doing this would allow me to create music with integrity that also resonates with the film. Some tracks that the screenwriter loved include:

Hokusai (Source Direct) – Crystal House (1996)

Attica Blues – Tender (The Final Story) (1997)

Burial – Fostercare (2006)

Calibre – Reach You Everywhere (2008)
https://www.youtube.com/watch?v=SLJWMJTNhYQ

J Majik – Slow Motion (1997)
https://youtu.be/WaRsinS5B1I?t=56m37s

Portishead – Only You (2005)
https://www.youtube.com/watch?v=TmDkzVvherk

Payfone – Subcoiscient Lamentation (2014)

Photek – Neptune (1997)

Lion Youth – Rat A Cut Bottle (Dub Version) (1978)
https://www.youtube.com/watch?v=gw1Ll2vV32A

Vangelis/Kraftwerk/Origins Of Synthetic Music

Vangelis is a soundtrack composer for films. He is a mostly self-taught musician who played the piano from a young age. He avoided taking formal music lessons and, as a result, never had great music theory knowledge; he believed that music schools close doors rather than opening them. In 1949, he performed in a concert of 2,000 people in his home country, which was his first breakthrough as an artist. In the 1960s, he formed his own pop-rock band named 'The Forminx'. Later, in 1968, he formed another band called 'Aphrodite's Child', which went on to be a hit, selling 20 million copies. He is notably known for being one of a handful of soundtrack composers who prefer to use synthesizers and sound effects as opposed to the traditional approach of using an orchestra. For this reason, Vangelis appeals to me as an artist, as my interest in music mostly revolves around electronic and synthesized sounds. Not only that, but his style is mostly atmospheric, with long, slow, sweeping, deep chords which have a somewhat emotional feel to them, which is partially similar to my production style in general. His style also uses subtle arpeggiated elements to give the sound more life. One of his most significant works was the soundtrack to the film 'Blade Runner' (1982). For me, it was the track entitled 'Blade Runner Blues' which really inspired me to undertake this project:

Vangelis – Blade Runner Blues (Blade Runner) (1982)
https://www.youtube.com/watch?v=RScZrvTebeA

This soundtrack stands out because its sound was far ahead of its time; it still sounds fresh today. It is also ahead of its time in the sense that entirely electronic music was something that was only just beginning to take off in 1982. There were a handful of artists during and prior to 1982 who had produced music that was exclusively electronically sequenced, but despite groups such as Kraftwerk having existed for over a decade at that point, even remote forms of electronic music (any music that uses synthesizers or drum machines) were only just becoming commonplace, and they were generally not warmly accepted by the public at first. An example of early electronically-composed music can be seen here:

Monoton – Leben Im Dschungel (1980) (Example of early experimental electronic music)
https://www.youtube.com/watch?v=xPqe0C6PbvU

As you can see, early electronic music also developed ambient music in the process. That is because one of the notable traits of synthesizers is their ability to hold notes for as long as you like, depending on your ADSR settings. Notable artists who pioneered or prefigured early synth and ambient music include Brian Eno, Tangerine Dream, Erik Satie, Irv Teibel and many more.

Kraftwerk

Kraftwerk is a German electronic music group founded by Ralf Hütter and Florian Schneider in Düsseldorf. They are known for being one of the first groups to create electronic music that reached the music charts and became accepted by the public. Their music in the 1970s was some of the first electronic music to be favoured by the public, and they were one of the most innovative groups of the time. Their influence came off the back of the German rock music scene. Prior to the albums where Kraftwerk used synthesizers, they were known as Organisation, an alias they used for their LP 'Tone Float', an album of experimental rock music.

Pre-Kraftwerk (1969)

Kraftwerk (1977)
https://www.youtube.com/watch?v=XMVokT5e0zs

You can hear the influence of the 'pre-Kraftwerk' years in their electronic-era music, as their style maintained that cold, gritty and industrial sound. Not only were they pioneers in being some of the first artists to release electronic music, they were also among a handful of people making dark music up to that point in time; music was typically quite laid back until around the late 60s/70s, when Punk Rock began to form, a genre which influenced Kraftwerk. Despite this, they were also inspired by the upbeat music of the time, such as the funk of James Brown. The theme of their music was often post-WWII European life, both criticising and praising the ever-growing technological advances of the time, and their aesthetic reflected this with futuristic, 'computerized' robotic themes. Because of this wide range of influences and the impersonal nature of their music, Kraftwerk received great praise from artists from a variety of backgrounds across the globe, which resulted in Kraftwerk becoming arguably the most influential group of musicians of the 20th century and into the 21st. Already one of the most influential artists in modern history, they were in essence fundamental to genres such as Techno and Electro.

Equipment For This Project

The music for this project will be created with Logic Pro X, as that is the DAW I specialize in. However, I want my work to connect with the history and the artists that have influenced me in this project, so I also intend to use the synths we have at college, such as the Dave Smith Prophet and the Roland V-Synth. I would also like to use the electric piano stored in the live performance room to layer over the top.

References and Bibliography

Klaus Kehrle. (Unknown). Tone Float – Organization | Songs, Reviews, Credits | AllMusic. Available: http://www.allmusic.com/album/tone-float-mw0000458850. Last accessed 4th March 2017.

Justin Dicenzo. (2012). Top 10 Synth Pioneers. Available: http://justindicenzo.com/2012/03/06/modern-monday-top-10-synth-pioneers/. Last accessed 3rd March 2017.

Matt Hubbard. (2014). Sound Design 101: Making Your Film Sound Great. Available: https://www.premiumbeat.com/blog/sound-design-101-making-your-film-sound-great/. Last accessed 11th February 2017.

Smithy Sipes. (2015). Sound Design. Available: http://www.independentfilmadvice.com/sound-design/. Last accessed 21st February 2017.

Wikipedia. (2017). Ambient Music. Available: https://en.wikipedia.org/wiki/Ambient_music. Last accessed 27th February 2017.

Ben Burtt. (2009). Ben Burtt demonstrates how he made Wall-E. Available: http://benburttinterviews.blogspot.co.uk/2009/02/ben-burtt-demonstrates-how-he-made-wall.html. Last accessed 26th February 2017.

Michael Coleman. (2016). SoundWorks Collection: The Sound of The Revenant. Available: https://vimeo.com/150150229. Last accessed 1st March 2017.

Sideways. (2016). Theme vs Leitmotif. Available: https://www.youtube.com/watch?v=qVlsIhbQ2qM. Last accessed 4th March 2017.

LightsFilmSchool. (2013). Sound Design Tutorial For Film: Audio & Pre-Production. Available: https://www.youtube.com/watch?v=BWN3RJGUetk. Last accessed 5th March 2017.

Esteban Batres. (2012). Sound Production of The Hobbit: An Unexpected Journey. Available: https://www.youtube.com/watch?v=xMKjPWQuBFs. Last accessed 8th March 2017.

Unit 9 – Developing Performance + Production Skills

Week 1

The first week of our live performance project was spent learning how to use Ableton Live 9. The key technique we learnt was how to warp audio files, which is essential for remixing and performing tracks live. We were introduced to hardware equipment such as the Akai MPC 5000 and the Dave Smith Prophet, and we were assigned to our groups for this project. I was in a group with Yellow Panther, who is a Trap producer. We made a great group because, although we produce different genres of music, we are both drawn to darker and moodier themes, so it wasn't hard for us to agree on anything in the project. We were instructed to bounce each channel of our Futureshock remix tracks into 8-bar stems, as we would be performing the remix live and would need the stems to do so. In essence, the first week was dedicated to familiarising ourselves with the tools we had and creating a foundation for our projects to stem from.

Week 2

Week 2 was used to shorten our remixes into 8-bar loops per channel, to include all the elements of a track in a summarised package. Each channel would be bounced individually into stems, ready to import into Ableton. We then had to warp all of our samples to the same BPM (80 in our case) so that they would all be in time. Because 165 BPM (the BPM of my remix) is so much faster than 80, it made more sense to me to warp the stems down to 160: the rhythms would still feel pretty much the same, and the distortion from stretching a digital file would be kept to a minimum. This process took a long time because my track used a lot of channels (20+), so eventually I had to decide which sounds I could realistically see us using. It ended up taking our entire lesson time, so we had to import our stems into Ableton in our own time.
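The trade-off behind choosing 160 BPM rather than 80 BPM comes down to simple arithmetic. Here is a minimal sketch (the `stretch_factor` helper is my own invention, not an Ableton feature): warping 165 BPM stems to 160 BPM barely changes their duration, whereas stretching to 80 BPM would double it, and a 160 BPM loop still sits perfectly on an 80 BPM grid because each session bar holds exactly two bars of the loop.

```python
# Hypothetical helper to illustrate the warping trade-off described above.
# Warp ratios close to 1.0 mean little time-stretching and few artefacts.

def stretch_factor(source_bpm: float, target_bpm: float) -> float:
    """Factor by which a clip's duration changes when warped from
    source_bpm to target_bpm (playback length scales with this ratio)."""
    return source_bpm / target_bpm

gentle = stretch_factor(165, 160)   # barely stretched
drastic = stretch_factor(165, 80)   # more than doubled in length

assert gentle < 1.04   # about 3% longer: minimal distortion
assert drastic > 2.0   # over twice as long: audible artefacts

# A 160 BPM loop still aligns with an 80 BPM session grid:
bars_of_loop_per_session_bar = 160 / 80
assert bars_of_loop_per_session_bar == 2.0
```

Keeping the warp ratio near 1.0 is why the rhythms "still feel pretty much the same" at 160 BPM.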

Week 3

Week 3 was our opportunity to utilise the equipment at the front of the synth lab, which included Korg Volcas, a Roland V-Synth, an Akai MPC, a Moog bass synth, a mixer, speakers and MIDI controllers. We imported our own drum kits and chopped samples into the MPC so we could play it like a controller. We spent this week getting to grips with the equipment, exploring a range of different settings on each piece throughout the lesson to understand the types of sounds that could be created and to identify which equipment was appropriate for which types of sounds. This was our first experience of actually playing hardware together, so it was a learning curve in terms of co-ordinating with each other, musically speaking.

Week 4

In week 4, we used the main equipment in the centre of the synth lab, which is essentially the same as the setup at the front of the room, except it does not have a V-Synth or a Moog. By this point, we understood how to use the equipment properly and managed to perform legitimate tracks with it. We were getting used to playing together, so we began to understand each other's styles, gradually realising what worked musically and what did not. We tried to remember which chords and drum patterns worked, and when something did not, we acted immediately, switching up our sequence without second-guessing ourselves.

Week 5

By week 5, we had experienced all of the equipment available to us, and decided to start narrowing down which equipment we liked and which we did not, in order to focus on the equipment we were definitely going to use. We decided we would definitely use the V-Synth, MPC 5000 and Korg Volca Beats. We also considered the Dave Smith Prophet 9, Moog Sub 37, Korg Volca Bass and an Alesis MIDI keyboard, but weren't entirely sure, as the sounds those instruments created didn't exactly resonate with us. Like week 4, we jammed out during this week as well. As you'd expect, our teamwork and co-ordination were improving as we learned more and more about each other's musical styles.

Week 6

Week 6 was spent noting the settings and presets we liked on all of the equipment we planned to use, so that we could jam out immediately on the day of the performance without having to worry about being stuck with bad sounds. We eventually decided that every piece of equipment we had been sceptical about the previous week would no longer be used for this project: it made more sense to spend our time mastering equipment with sounds we knew we liked than to sacrifice time on sounds that didn't stimulate us and that we'd probably drop later anyway. Like the previous two weeks, we also jammed out this week, and began memorising the sequences and chords we were playing so that we could form actual songs.

Week 7

Week 7 was our final week of practice. This week, we incorporated Ableton into the rig to include some of our samples as melodic elements in the tracks we had been practising in previous weeks. Our use of the samples was deliberately stripped-back and minimalistic because neither of us really liked our remixes that much, and we were more keen to experiment and use the equipment to make something new. This opened doors for our creative potential without the restriction of essentially carbon-copying previous work, which felt natural. To ensure the tracks were clearly remixes, we selected samples that were prominent in both the original Futureshock track and our own remixes. We also made final decisions on the presets we would use on the day of the performance, as well as which of our own samples to use.

Performance Day

On the day of the performance, we arrived at college early (9AM) so that we had plenty of time to prepare the equipment for the event, which would start at 11:30AM. The first step was to transport the equipment from the music department downstairs into the cafeteria. The equipment included:

– Apple Mac w/ Ableton Live 9 installed
– Ableton Push
– 12 Channel Mixing Desk
– Korg Volca Beats
– Korg Volca Bass
– Korg Volca Keys
– Korg Kaoss Pad
– Roland V-Synth
– Roland Octo-Pad
– QSC Active Speakers (x3) (1 monitor, 2 main)
– Peavey Amplifier
– M-Audio Keystation 49 MIDI Keyboard
– Moog Sub 37 Bass Synth
– Dave Smith Prophet 9 Synth
– Pioneer XDJ-1000 (x2)
– Pioneer CDJ-1000 (x2)
– Pioneer DJM-750
– Shure SM-58 Performance Microphone
– Akai MPC-5000 Drum Machine
– Various Power Cables (Kettle leads etc)
– Socket Extension Bars (x4)
– Various Array of 1/4″ Jacks/Phono/2.5mm Jacks etc etc
– Tape
– Tables

Performance evaluation

I think that my performance could have been better, because the volume levels of the channels were essentially left to chance, with no one engineering the sound. This meant that whenever we brought in a new piece of equipment during the performance, it would often come in far too loud, or be inaudible. We would respond by turning our attention to the mixing desk to work out which channel the problematic instrument's signal was assigned to. This wasted time, diverted our attention from playing the actual music, and made the performance seem unprofessional from the audience's perspective. The issue could have been rectified if we had had the chance to balance the instruments' levels beforehand and note down the actual desk fader levels so we could set them properly prior to the performance, or if someone had been nominated to engineer the sound live for the performers, as you would with an actual live band.

Also, the tables the equipment was laid on were an awkward height, so I had to bend my knees, tilt my neck at almost 90 degrees and stretch my arms to reach the equipment. This was distracting and meant that I could not move to the music, which impaired my sense of rhythm and timing, resulting in almost-off-beat rhythms in the music.

On a positive note, the notes we had made of the equipment settings we liked meant that we could create and structure our tracks with ease.

Performance Video

Our performance starts at 10:59 and ends at 26:00.

Unit 10 – First Choice University

My first choice of university would be BIMM Brighton, because its music department has a strong reputation and it is regarded as one of the best universities for studying music technology. It is a small, music-specific university with a great many links into the music industry. Its location is ideal, as Brighton is a strong cultural hub with a quickly growing music scene. Another reason the location is ideal is that the University of Brighton is also there, so there will be a lot of young people my age looking to break into the film, gaming and media industries, who at some point are going to need someone who knows how to use a DAW to create soundtracks or special effects for their work. This gives me a foot in multiple creative industries.

Unit 10 – Social Media Profiles

https://soundcloud.com/crystal-pressure

Soundcloud

Soundcloud is a free service designed specifically for artists to post music on the internet so that it is publicly available for the world to listen to. It can also be used to post mixes, though Soundcloud has a notorious reputation for removing users' mixes for copyright infringement, since a mix essentially involves uploading other people's music. Personally, I have been uploading mixes to Soundcloud monthly for almost a year now and have never had Soundcloud remove any of them, so perhaps I have been lucky, or perhaps Soundcloud's reputation for taking mixes down is overhyped. I have managed to build a handful of contacts through Soundcloud (or at least gained some follows from somewhat established artists who have released music), so it is a useful tool for linking artists together, and the ability to message other users privately makes it easy to communicate with other artists. Unfortunately, Soundcloud requires you to pay £4 per month to double your upload time, or £8 per month for unlimited upload time along with access to statistics that can help you understand the demographics you are appealing to.

https://www.facebook.com/CrystalPressure/

Facebook

Facebook is a social media website designed primarily for general social networking among people you probably already know, but it does allow brands and artists to promote themselves via pages. The link above is my own page, though I haven't really done anything with it yet, hence why it is nothing more than a bio at this point. I have, however, used my personal Facebook profile to network with people involved in music via the groups function, which allows people to create groups on a specific topic where those with the same interests can communicate and share new music. Through it I have managed to network with some established members of the music scenes for the genres I am into.

Twitter

Twitter is a social media website similar to Facebook, except that it is considered less private, which is better for running an artist page. As with my Facebook page, I have not really used it as an artist so far, as I don't feel I am in a position to promote myself yet. It is very impersonal, which means it is not as good as Facebook for networking, but it is useful if you are an established artist looking to connect with your fans.

Unit 10 – Personal Development

My personal development target is to move away from London, refreshing my surroundings so as to influence myself in a new way. For me, the ideal locations for university would be either Bristol or Brighton. Bristol is a city with a thriving musical culture similar to London's own, except that it is not directly influenced by London artists; it is more of a parallel. Brighton, on the other hand, tempts me because it is a quickly growing hotspot for underground music clubs and events, so it would be wise to move there as soon as possible to get a head start in a potentially big scene. I am also tempted by locations such as Birmingham or Manchester due to their reputations for electronic and urban music.

I also intend to travel to different cities in the future in order to experience their music scenes from a raw, upfront perspective. Cities I would be interested in travelling to include Detroit, for its long history in music and the influence it has had on the world; Amsterdam, for its welcoming attitude towards music, clubs and culture in general; and Tokyo, as a wildcard destination, for the culture shock I'd experience as a westerner in a completely different country with its own unique culture. I am also interested in travelling to LA or New York because of their reputations in the film, TV and game industries, as well as their melting pot of musical influences. I am also open to the idea of migrating away from England should I ever find a place that stimulates me musically, has reasonable living costs and strongly supports its own culture.

Music CV – Curriculum Vitae

Name:
Charlie Tandacharry
Email:
xxxxxxxxxxx@xxxxx.com
Home Address:
xxxxxxxxxx
Phone Number:
xxxxxxxxx

I am an individual with strong interests in music, philosophy, psychology, modern history, art and film. I am very adaptable towards people and tasks, which means I am consistent and reliable in my attitude to work and can get along with virtually anyone, making me a great team member. I am accommodating, knowledgeable, intuitive, friendly, positive, considerate and trustworthy.

I spend a great amount of my free time perfecting my craft in music production, as I am a perfectionist and strongly believe there is always room to improve. I dedicate myself because I want to be the best I can be, and because I enjoy producing: it is an outlet for expressing my experiences, feelings and thoughts, as well as stimulating my creativity. I always try some form of new technique in each of my productions, often techniques I have imagined on my own rather than emulations of another producer's style. That does not mean, however, that I am averse to cultural influences; I embrace them entirely, but I try to avoid creating generic productions.

I am also a DJ who can mix on vinyl and CDJs. I enjoy mixing in my free time, and am beginning to take DJing to the next level by sourcing members of the music industry involved in the style of music I am particularly interested in producing and DJing, and negotiating sets for me to play both at gigs and on online radio. I have performed as a DJ multiple times in public settings, such as at the ExCeL centre representing my college's music department.

I am currently working on a film as a soundtrack composer and foley effects designer. So far, I have not only been producing music and sound effects for this film, but have also gained insight into the film industry by regularly attending the set to understand the 'vibe' of the film, which I can transfer to my productions for the soundtrack. Working on a film set has allowed me to develop my communication and prospecting skills, interpreting the moods and emotions the film is supposed to evoke and creating a soundtrack that expresses them. These skills are transferable to sound design for games and TV.

I also have experience with live sound engineering and undertaking the role of technician for music events such as Oxjam. I picked up live sound engineering skills very quickly, and can identify the causes of issues relatively easily.

Education

The Norwood School (2009-2014)
GCSE Mathematics: Grade B
GCSE Photography: Grade B
GCSE English Language: Grade B
GCSE Chemistry (Triple Science): Grade C
GCSE Physics (Triple Science): Grade C
GCSE English Literature: Grade C
GCSE French: Grade C
GCSE Statistics: Grade D
GCSE Religious Studies: Grade D
GCSE Geography: Grade D
GCSE Biology (Triple Science): Grade E

Richmond-upon-Thames College (2014-2015)
BTEC Level 3 Engineering Diploma (90 Credits)

South Thames College Wandsworth (2015-2017)
UAL Level 3 Music Production/Technology (180-420 Credits)

Unit 9 – Advanced Production Techniques

The process of remixing the track began by loading all of the individual stems from the track into audio channels in Logic Pro X. Because there were so many stems, I had to listen to each and every one individually to see whether I liked the sound or could see potential in it, deleting the stems that were irrelevant to me. Stems are large files that can take a long time to transfer (potentially over an hour), and keeping more stems loaded in Logic Pro X puts a higher strain on the Mac's processor. Deleting them also clears up clutter in the arrangement window.

Most of the samples I decided to keep were sound effects and atmospheres rather than melodic material or sounds that were dominant in the original mix, which is one of the reasons my remix bears very little resemblance to the original. Next, I used the Time and Pitch Machine (found in the functions tab beneath the arrangement page) to adjust the tempo of all the audio files to the tempo I wanted to work at (from 172 BPM to 165 BPM).

For sounds that were relatively short, I cut them down to very short waveforms containing only one note or tone, so that I could eventually loop them seamlessly, ready for use with the EXS24 sampler as an instrument. Once a single-note waveform was cut, I would bounce that short file out of Logic Pro X as a WAV file, create a second audio channel and import the file into it. The file in the second channel would then be reversed using the reverse tool in the functions tab of the waveform window beneath the arrangement space. The beginning of this reversed file would be placed at exactly the end of the original forwards file to create a seamless blend. Both files were looped in the same pattern until the loop lasted around a minute, to ensure it was long enough for the higher notes in the EXS24 sampler in case I wanted to play high notes. I went through this process for pretty much all of the sounds I sampled, excluding the vocal loop and a downward-spiral sound effect.
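The forward-plus-reversed looping trick can be sketched outside Logic with a few lines of NumPy. This is an illustrative sketch of the idea only, not the EXS24 workflow itself; the `palindrome_loop` helper and the synthetic sine "note" are my own stand-ins:

```python
import numpy as np

def palindrome_loop(note: np.ndarray, repeats: int) -> np.ndarray:
    """Place a reversed copy directly after the forward slice, then tile
    the pair. The waveform ends exactly where it starts, so the loop
    point has no discontinuity (no click)."""
    unit = np.concatenate([note, note[::-1]])   # forwards, then backwards
    return np.tile(unit, repeats)

# Stand-in for a single-note slice: a short decaying 220 Hz sine at 44.1 kHz.
t = np.linspace(0, 0.25, 11025, endpoint=False)
note = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)

loop = palindrome_loop(note, repeats=8)

# Each unit boundary lands back on the first sample of the slice,
# so wrapping the loop produces no jump in amplitude.
assert loop[0] == loop[2 * len(note)]
```

Tiling more repeats is the NumPy analogue of extending the loop to around a minute so the sampler has enough material when it speeds the sample up for higher notes.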

Next, I created an auxiliary bus on one of those two channels, and sent both channels to the same auxiliary bus. I then opened the virtual mixing desk, increased the signal sent from each channel to the new auxiliary bus to around 0dB, and decreased the volume of both channels to compensate for the volume increase. This combines the sound from both channels onto one channel (the auxiliary channel), meaning that an effect applied on the auxiliary channel affects all the signals sent to it, in proportion to each channel's send level. On the auxiliary channel, I applied some regular large-hall reverb using Space Designer to help smooth out the transition between the two files. I then applied a special drone-tone reverb, also found in Space Designer, to completely change the tone of the sound: one drone made the auxiliary channel sound like an electric-piano chord. I used this same process on some of my other samples, which allowed me to make a string sound and a choir sound as well. Once these sounds had been created, I bounced the auxiliary channels into their own audio files, then imported these new files into new audio channels, ready to be transferred to EXS24 samplers for use as instruments.
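The bus routing itself can be mimicked with plain array maths. A rough sketch under obvious simplifications (unity-gain sends, static test signals; `db_to_gain` is my own helper, not a Logic function): sending both channels to the aux at 0dB sums them onto one signal, and anything inserted on the aux then scales both contributions equally.

```python
import numpy as np

def db_to_gain(db: float) -> float:
    """Convert a decibel value to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# Two source channels (static test signals standing in for the
# forward and reversed audio files).
forward = np.full(100, 0.5)
reversed_copy = np.full(100, 0.25)

# Send both to the aux bus at 0 dB (unity gain) and sum them there.
send = db_to_gain(0.0)
aux_bus = forward * send + reversed_copy * send

# An insert effect on the bus (a plain -6 dB gain standing in for the
# Space Designer reverb) affects both contributions proportionally.
processed = aux_bus * db_to_gain(-6.0)
assert np.allclose(processed, (forward + reversed_copy) * db_to_gain(-6.0))
```

This is why lowering the channel faders after raising the sends works: the aux carries the summed signal, so the channel faders only control how much dry signal reaches the main mix.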

At this stage, the rest of the production came down to creativity and musicality rather than plugins or effects (beyond minor touches such as reverbs and echoes). All of the sounds used in my remix, excluding two breakbeats and a sine-wave bassline, originated from the track I was remixing. Once I had finished the structure of the track, it was time to engineer it so that the sounds were all balanced. First, I adjusted the volume of each individual channel until the levels were approximately balanced. Next, I created several summing tracks for the different types of instruments: one for the drums, one for the melodic content, and one for the atmospheres. I routed each track to its appropriate summing group, then applied a compressor to each summing track. I would then solo one summing group at a time and adjust its compressor to mesh the sounds together. This meant using very low threshold and ratio settings, as I didn't really want a noticeable 'sucking' sound, but rather glue compression so that louder sounds would not overshadow quieter ones.
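The "glue" idea can be shown with a deliberately simplified sketch. This is not Logic's compressor: it is a hypothetical static gain curve with no attack, release, or make-up gain. It only demonstrates the principle that a gentle ratio above a threshold narrows the gap between loud and quiet material without audibly squashing it.

```python
# Hypothetical static compressor curve illustrating gentle "glue"
# compression: a low ratio only slightly reduces levels above the threshold.

def glue_compress(samples, threshold: float = 0.3, ratio: float = 1.5):
    """Reduce the portion of each sample's level above the threshold by
    the ratio; levels below the threshold pass through unchanged."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

squashed = glue_compress([0.1, 0.5, 0.9], threshold=0.3, ratio=1.5)
# the quiet sample passes through untouched; the louder two are pulled
# slightly closer to it, which is the "glue" effect in miniature
```

A high ratio (say 10:1) with the same threshold would flatten the loud samples much harder, producing the obvious 'sucking' character the text describes avoiding.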

Once this was done to all three summing groups, I created a new bus for the summing groups plus the bass (which I had excluded from the summing tracks), set the send level to 0 dB on all four channels, and reduced each channel's volume to compensate for the level change. I then opened the mixer and added a compressor to the new auxiliary channel so that all of the sounds would be glued together as a whole, on top of the glue already applied within each group. The compressor settings on this auxiliary channel were also subtle (low threshold and ratio settings), because I only wanted the separate groups to mesh together without changing the compression of the individual groups too much. Once this was done, I was ready to bounce out my track as an audio file.

One thing I did notice is that my headphones seem to drastically under-represent the low sub-bass frequencies. This matters because I used only one set of headphones throughout the entire production of this track. I noticed the issue when I played the track through a hi-fi system I was familiar with, and the same thing has happened with several of my tracks before. Unfortunately, I only realised this fault close to the deadline, so I was unable to make any mixdown changes: I would have had to keep editing the bass volume, bouncing the track out, transferring it to the computer connected to the hi-fi, and going back and forth until the balance was right. This can take hours, and the computer connected to the hi-fi is more often than not occupied by somebody.

The style I remixed my track into was D&B like the original, except that mine leans slightly towards Jungle, the genre D&B originated from, which is often regarded as a sister genre to D&B. Current trends in D&B/Jungle include taking influence from early Jungle/rave music, such as using raw breakbeats with minimal processing, which I have done in my track. There is also a recent trend of Jungle/Footwork hybrids, which take more musical influence from Footwork's hometown of Chicago and tend to use instruments such as electric pianos or wide, deep synth pads, reflecting earlier Chicago music such as House and Soul. I have likewise used a similar-sounding electric piano and deep synth sounds. There is also a current trend of using sine-wave basslines, particularly in Hip Hop; this is nothing new, but it is very prominent lately, and it is an element I have incorporated into my track.

Examples of recent music with these features
https://soundcloud.com/platform/machinedrum

A technique I have used which is relatively unique to Jungle is breakbeat chopping and layering. Chopping is where you move the transients of a breakbeat file in time and cut the file into slices of equal length, in time with the grid, then create your own rhythmic patterns from those slices. Layering is where you stack two or more breakbeats on top of each other to give the rhythm a 'thicker' sound.
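Chopping and layering can both be sketched with plain lists standing in for audio samples. The function names are hypothetical; the point is that chopping is equal-length slicing followed by re-sequencing, and layering is a sample-by-sample sum of two beats.

```python
# Hypothetical sketch of breakbeat chopping (equal-length slices,
# re-sequenced into a new pattern) and layering (sample-wise sum).

def chop(breakbeat, slices: int):
    """Cut a breakbeat into equal-length slices ready for re-sequencing."""
    step = len(breakbeat) // slices
    return [breakbeat[i * step:(i + 1) * step] for i in range(slices)]

def resequence(parts, pattern):
    """Rebuild a new rhythm by playing the slices in a chosen order."""
    out = []
    for idx in pattern:
        out.extend(parts[idx])
    return out

def layer(beat_a, beat_b):
    """Layer two breakbeats by summing them sample-by-sample."""
    return [a + b for a, b in zip(beat_a, beat_b)]

parts = chop([1, 2, 3, 4, 5, 6, 7, 8], slices=4)   # [[1,2],[3,4],[5,6],[7,8]]
new_beat = resequence(parts, [0, 0, 3, 2])          # [1, 2, 1, 2, 7, 8, 5, 6]
```

Because every slice is the same length, any re-ordering stays in time with the grid, which is what makes chopped patterns feel like a new rhythm rather than a stutter.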

Links to the remix bounces:

https://soundcloud.com/crystal-pressure/futureshock-drums/s-yCUb9
https://soundcloud.com/crystal-pressure/futureshock-instruments/s-mS26B
https://soundcloud.com/crystal-pressure/craggz-x-parallel-forces-futureshock-crystal-pressure-remix/s-FRbn2

Unit 10 – Preparing For Progression

One of my role models in music is a duo known by their main moniker, Source Direct. Source Direct were a well-respected duo in the early days of Jungle/D&B (1994–1997). They pioneered a dark but deep style of Jungle, distinct from the more common reggae-influenced Jungle of the time, and originally produced their music in a home studio they had built themselves.

They initially pressed white labels/dubplates of their records, which was standard practice at the time, to give exclusively to DJs such as LTJ Bukem, as their early style was much closer to the atmospheric D&B that LTJ Bukem was pioneering than to anything else. Eventually, darker offshoots of Jungle began to emerge in 1995 from labels such as Metalheadz.

The Essential… Source Direct

https://www.discogs.com/artist/301-Source-Direct

Ideally, I would like to take music production and DJing to a professional level and generate an income sufficient to support myself financially. However, this is extremely difficult, as there are far more artists now than 10 or more years ago, and consumers often illegally download pirated copies of music through the internet. The music I am interested in producing and DJing is relatively niche, which is another difficulty when it comes to generating enough money to live on.

I also have an interest in film and games, so I am interested in creating a main income as a sound designer/foley effects producer for picture. I have chosen this path because I am currently creating the soundtrack for a film that a friend of mine is producing, so I already have some experience, and I have enjoyed the process of really analysing the script and discussing the moods of scenes to understand the type of music that would suit each scene and the visions and concepts of the other creators making the film. Sound to picture has a very humanistic aspect, which plays to my strengths as a very perceptive person. It also opens doors to other roles in the creative industries, allowing me to develop other skillsets and giving me a diverse portfolio and an amount of experience that is attractive to employers.