Saturday 3 September 2016

Investigating Loudness Normalisation in Spotify and YouTube

Lately I have been trying to bus-compress my tracks to get them as loud as commercially released songs. This is quite frustrating, as the amount of limiting required is bad for the sound. It got me thinking about the loudness war/loudness race, so I had a look into the loudness normalisation systems in Spotify and YouTube to see just how loud I really need to make my tracks... It's actually very interesting, so I'll post what I found here.

The Loudness Race

Loudness is used specifically and precisely for the listener's perception. Loudness is much more difficult to represent in a metering system. (…) Two pieces of music that measure the same on a flat level meter can have drastically different loudness (Katz, 2007, pg. 66).

According to Katz (2007, pg. 168), when two identical sounds are played back at slightly different levels, the louder version tends to sound 'better', although this is only a short-term phenomenon. This has led engineers to compete for loudness to attract attention in the jukebox or on the radio, where the playback level is fixed, which is achieved through the use of compression and limiting on the master bus. Unfortunately, this can lead to fatiguing and eventually unpleasant-sounding records, or in the extreme case, audible clipping, as in the famous example of Metallica's "Death Magnetic" album (Michaels, 2008). The loudness race has also been present in television (with advertisements competing for audience attention) and radio (with stations competing for listeners) (Robjohns, 2014).

Loudness Normalisation

The first attempt to implement some kind of regulation on loudness came in 2006, when the International Telecommunication Union (ITU) published ITU-R BS.1770, a standard recommending a loudness metering algorithm to act as "electronic ears" for controlling the playback level of television advertisements. The algorithm uses a 400ms integration time and a two-stage filter to simulate the frequency response of the head and ears, and produces an average reading for the entire duration of the track (see figure 1). A gate is used to filter out quieter sections so that they do not affect the reading (ITU, 2015).
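The gating stage is easier to see in code than in prose. Here is a minimal sketch of it in Python (mono only, and assuming the input has already been through the K-weighting filter - the filter coefficients and channel weightings of the real standard are omitted):

```python
import numpy as np

def integrated_loudness(kweighted, rate):
    """Gated loudness, simplified: mono input that has already been
    passed through the BS.1770 two-stage K-weighting filter."""
    block = int(0.400 * rate)   # 400 ms measurement blocks
    hop = int(0.100 * rate)     # blocks overlap by 75%
    blocks = []
    for start in range(0, len(kweighted) - block + 1, hop):
        z = np.mean(kweighted[start:start + block] ** 2)   # mean square power
        blocks.append((z, -0.691 + 10 * np.log10(z)))      # block loudness, LUFS
    # Absolute gate: ignore near-silent blocks (below -70 LUFS)
    gated = [(z, l) for z, l in blocks if l > -70.0]
    # Relative gate: ignore blocks more than 10 LU below the
    # average of what survived the absolute gate
    ref = -0.691 + 10 * np.log10(np.mean([z for z, _ in gated])) - 10.0
    gated = [(z, l) for z, l in gated if l > ref]
    # Integrated loudness is the average power of the surviving blocks
    return -0.691 + 10 * np.log10(np.mean([z for z, _ in gated]))
```

This is why a long quiet passage does not drag the reading down: its blocks fall below the gates and are simply left out of the average.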

The ITU standard has been put to use in television so that viewers do not need to reach for the volume control when changing channels or between programmes. Once the loudness of a programme has been measured, a static level change can be applied if it is too loud or too quiet, in order to hit a specific 'target loudness level', which for television is recommended to be -23 LUFS (loudness units relative to full scale) (Robjohns, 2014).
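Since the correction is a static gain, implementing it is trivial once the loudness is known: the gain is just the difference between target and measurement, applied uniformly. A sketch (the -18.5 LUFS reading in the comment is a made-up example):

```python
def normalise_to_target(samples, measured_lufs, target_lufs=-23.0):
    """Static level change: one fixed gain for the whole programme,
    with no compression or limiting involved."""
    gain_db = target_lufs - measured_lufs      # e.g. -23 - (-18.5) = -4.5 dB
    return samples * 10 ** (gain_db / 20.0)    # convert dB to a linear factor

# quieten an over-loud programme measured at -18.5 LUFS to the -23 LUFS target:
# corrected = normalise_to_target(samples, measured_lufs=-18.5)
```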

Katz (2007, pg. 172) predicts an end to the loudness race as consumer listening moves on from the compact disc to iTunes, whose "Sound Check" feature normalises the loudness of audio tracks when in shuffle mode, removing any uncomfortable jumps in loudness when listening to music from different eras or genres. This processing is also applied to iTunes Radio. According to Robjohns (2014), iTunes normalises to a reading of around -16 LUFS, using an algorithm that appears to be broadly similar to ITU-R BS.1770, although the details of Apple's method are not publicly available. Robjohns suggests that this target loudness may be too low, as some devices restrict the maximum listening level due to loudness concerns, meaning users may not be able to compensate for the reduction in level.

Other music streaming services have also started adopting loudness normalisation. For example, Spotify has an option to "Set the same volume level for all tracks", which is switched on by default. However, according to Shepherd (2015c), the reference level is too high: to keep tracks at a consistent level, Spotify adds extra limiting, which can pump and crunch in an unpleasant way. YouTube has also implemented a volume-matching system for music which, according to the findings of Shepherd (2015b), does not use ITU-R BS.1770, as the readings fluctuate between around -12 LUFS and -14 LUFS.

Figure 1. An outline of the ITU-R BS.1770 algorithm.

Procedure

To measure how YouTube and Spotify are normalising loudness, I used a program called Soundflower to route the output of my MacBook into Logic X and out again, where I could load iZotope's "Insight" plugin for metering, which helpfully has a preset for measuring loudness to the ITU standard. Also shown are true peak values, which indicate how much each track has been turned down (assuming it was mastered close to 0 dBFS).
Figure 2. Screen grab of the measurement process.
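As a side note, the same kind of reading can also be taken offline on a bounced file; here is a minimal sketch using the open-source pyloudnorm library (an implementation of ITU-R BS.1770) together with soundfile - the filename is just a placeholder:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("captured_output.wav")  # audio captured via Soundflower
meter = pyln.Meter(rate)                     # BS.1770 meter: K-weighting + gating
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```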


SPOTIFY 

Track | Loudness | Peak Level | Notes
Closer by The Chainsmokers | -12 LUFS | -2 dBFS | Electronic/Pop (2016)
Didn't Mind by Kent Jones | -10 LUFS | 0 dBFS | Electronic/Pop (2016)
Final Song by MØ | -12 LUFS | 0 dBFS | Electronic/Pop (2016)
Sunshine (Radio Edit) | -10 LUFS | -1.1 dBFS | Electronic/Pop (2016)
That Was Just Your Life by Metallica | -12 LUFS | -6 dBFS | From "Death Magnetic"
Clair de Lune by Debussy | -17 LUFS | 0 dBFS | Classical piano
Spotify advert | -6 LUFS | +1 dBFS |
Don't Let Me Down - Zomboy Remix | -12 LUFS | -3 dBFS | Dubstep

Spotify's system appears to be working, keeping everything between -10 and -12 LUFS, bar a few exceptions. The first five tracks were taken from a "New Music" playlist and were released this year. It is interesting to see that two of these tracks peaked at 0 dBFS, implying that Spotify did not turn them down at all: they were mastered to hit this target loudness. "Closer" and "Sunshine" were only turned down by 2 dB and 1.1 dB respectively, meaning they were not significantly louder. The Metallica track was taken from the notorious "Death Magnetic" album, whose over-compression brought the issues of the loudness race into public awareness. This track needed to be reduced by 6 dB in order to hit the loudness target.

To really test the system, I also played a piece of quiet piano music (Clair de Lune by Debussy), which gave a reading of -17 LUFS. This falls below the apparent target, but in practice it didn't sound out of place; you expect piano music to be a bit quieter. This seems to contradict Shepherd (2015c), who claims that Spotify applies its own limiting to more dynamic music. Either this is not the case, or certain types of music are exempt. It could be that classical music has its own loudness target, lower than that for pop genres, due to the lack of bus compression used in classical music.

What let the system down was the exemption of adverts from the normalisation: going from Debussy to Spotify's advertisement for an electronic music playlist was a jump of about 11 LU, which was very unpleasant.

YOUTUBE 

Track | Loudness | Peak Level | Notes
This Is What You Came For by Calvin Harris | -14 LUFS | -4 dBFS | Electronic/Pop (2016)
Closer by The Chainsmokers | -15 LUFS | -4 dBFS | Electronic/Pop (2016)
YouTube advertisement for Sky TV | -12 LUFS | -4 dBFS |
Heathens by twenty one pilots | -13.5 LUFS | -3 dBFS | Indie/Pop (2016)
The Day That Never Comes by Metallica | -14 LUFS | -4.5 dBFS | From "Death Magnetic"
YouTube advert | -13 LUFS | -4 dBFS |
Rachmaninov - Piano Concerto No. 2 | -17 LUFS | 0 dBFS | Classical piano with orchestra

YouTube appears to aim for a lower loudness level than Spotify. Music on YouTube seems to fall in the range of -13 to -15 LUFS, close to the -12 to -14 LUFS range quoted by Shepherd (2015b). The advantage of choosing a lower target is that there is a smaller jump when you switch from pop music to uncompressed classical music. YouTube advertisements are louder, at around -12 to -13 LUFS, but the difference is not as large as on Spotify. This makes for a much better listening experience.

Conclusion

The findings show that Spotify's system aims for a loudness of between -10 and -12 LUFS. Limiting a track any further than this would be pointless, as it would just be turned down. The Metallica track had very audible clipping when played side by side with other tracks at a similar loudness. For release on YouTube, it would be better to aim for a level of between -13 and -15 LUFS to maximise the dynamic range of the track. For release on both platforms, I feel it would be best to aim for Spotify's playback level; otherwise the track may sound quiet compared with other tracks, or be subject to Spotify's automatic limiter, if there is such a thing.
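To make the trade-off concrete, here is a quick sketch of what each platform's apparent target (estimated from my readings above, not an official figure) would do to two hypothetical masters:

```python
# Apparent playback targets estimated from the measurements above
targets = {"Spotify": -11.0, "YouTube": -14.0}  # LUFS (estimates, not official)

# Two hypothetical masters: one heavily limited, one left dynamic
masters = {"loud master": -8.0, "dynamic master": -14.0}  # LUFS

for platform, target in targets.items():
    for name, lufs in masters.items():
        # assume tracks are only ever turned down; whether quiet tracks
        # get turned up (with limiting) is exactly the open question above
        gain = min(target - lufs, 0.0)
        print(f"{platform}: {name} at {lufs} LUFS is played back at {gain:+.1f} dB")
```

On these numbers, the extra limiting on the loud master buys nothing: it is simply turned down further, ending up no louder than the cleaner, more dynamic master.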

It is hoped that these systems will reduce the commercial pressure on mastering engineers to over-compress music at the expense of sound quality. Radio still has no published loudness standard (Robjohns, 2014); adopting one would be an important step towards finally removing the demand for hyper-compression in mastering.

References

ITU (2015). BS.1770: Algorithms to measure audio programme loudness and true-peak audio level. [online] Itu.int. Available at: https://www.itu.int/rec/R-REC-BS.1770/en [Accessed 17 July 2016].

Katz, B. (2007). Mastering Audio: The Art and the Science. Amsterdam: Elsevier/Focal Press.

Michaels, S. (2008). Metallica album sounds better on Guitar Hero videogame. [online] The Guardian. Available at: https://www.theguardian.com/music/2008/sep/17/metallica.guitar.hero.loudness.war [Accessed 17 July 2016].

Robjohns, H. (2014). The End Of The Loudness War?. [online] Soundonsound.com. Available at: http://www.soundonsound.com/techniques/end-loudness-war [Accessed 17 July 2016].

Shepherd, I. (2015a). YouTube loudness normalisation - The Good, The Questions and The Problem. [online] Production Advice. Available at: http://productionadvice.co.uk/youtube-loudness-normalisation-details/ [Accessed 17 July 2016].

Shepherd, I. (2015b). YouTube just put the final nail in the Loudness War's coffin. [online] Production Advice. Available at: http://productionadvice.co.uk/youtube-loudness/ [Accessed 17 July 2016].

Shepherd, I. (2015c). Why Spotify's 'set the same volume level for all tracks' option is back, and why it matters. [online] Production Advice. Available at: http://productionadvice.co.uk/spotify-same-volume-setting/ [Accessed 17 July 2016].

Sunday 21 August 2016

New video is out


Finally finished the mix for this song. The full version will be on the album/portfolio and features a more complex orchestration, with drums, guitar, bass, piano, cello, oboe and flute - which have now been edited and are waiting to be mixed. However, this video is part of the promotional strategy: posting acoustic versions on YouTube of the tracks that will be available for purchase in full later on. This also means that, when I put out a 'best of' compilation of acoustic performances, I'll essentially be able to sell the same song twice.

The video was finished a while ago, but I had trouble with the mix because:
1) the piano was muddy and had lots of noises on it (pedal and chair squeaks)
2) the vocals bled loudly onto all the piano tracks, so pitch correction was very, very hard to do without the sound getting all phasy.

The first problem was fixed by using iZotope's De-crackler to get rid of (some of) the noises, by a lot of EQ-ing, and by making up for the cut frequencies with the Waves Exciter.

The second problem was a bit more difficult to tackle and, funnily enough, where both Auto-Tune and Melodyne (!) failed, Logic X's Pitch Correction managed to fix the pitchy notes without being extremely obvious. I would never have expected it, but it goes to show that some plugins can serve certain purposes better than others due to their different algorithms.

Friday 12 August 2016

Band Recording

Today I recorded a band - the first full-band recording I've done at UH! I was a bit nervous, but going to Studio 1 and setting everything up yesterday saved a lot of hassle today. It gave me time to think over the placement of the band and to position the mics carefully. I recorded everything together, except for the vocals, which were overdubbed later.

The problems started early in the day, when the signal from one of the overheads was not coming through. I switched the cables, tried a different input, etc. - no result. I was afraid the mic was broken, but when I swapped the two overheads, the other one wasn't working either. It turned out that some of the desk inputs weren't working. Later on, when I couldn't get the DIs to work, I realised what the problem was: the phantom power must be broken on some of the channels. The KM184s and the DIs didn't work on those channels, but an SM57 did. I was using Logic, so due to the setup I was limited to 16 channels - 12 once the broken channels were excluded.

I decided to ditch the DIs and only mic the amps and gave up the snare bottom to free up some channels. In the end, the setup was this:

1. Kick: Shure Beta 52
2. Snare: SM57
3. Hi tom: MD421
4. Mid tom: MD421
5. Low tom: MD421
6. OH L: KM184
7. OH R: KM184
8. Bass: DI
9. Guitar 1: SM57
10. Guitar 2: MD421
11. Vocals: Shure SM58 for the 'scratch' vocal, U87 for the overdubbed vocals

I managed to get a surprisingly good sound! I don't think I've ever managed to get such a good sound for a band before, especially for the guitars.
We recorded two songs. The first one went a bit slowly, because I tried to get the band to play to a click, which they were not used to. There were no vocals either, so they kept messing up the structure. With a bit of practice they managed to stay on the click, and I think they delivered a reasonably tight performance.
For the second one, I decided to give up the click and go for a more live feel. I also gave the singer an SM58 to sing into and put the vocals in the foldback. The result was a much more heartfelt performance, and we finished the song in probably a quarter of the time the first one took.
The next step was quickly choosing the best take with the band and overdubbing the vocals for the first song. Listening back to the second one, I realised that there was absolutely no bleed whatsoever from the scratch vocals, so I muted them and overdubbed the vocals with the much nicer U87.
Due to how I set up the room, the instruments were pretty well isolated, even though they all played at the same time. I made a booth for the drums at the back of the room using two panels, and placed one guitar amp in the opposite corner of the room, facing the treated wall, with its back to the drums and an acoustic panel behind it. The other guitar amp was on the other side of the same panel, but facing away from the first amp, towards the drum booth. The bass was DI'ed. The vocalist faced the other corner of the room, next to the bass player, away from the guitar amps and the drums.

After recording vocals, we had enough time to play around with backing vocals, a whistle solo and some group clapping. Not sure how much of that will make it on the final recording, but we surely had fun!

Here is a link to the band's music: https://soundcloud.com/themarras1-1/almost-old

Tuesday 9 August 2016

The 'Lana Del Rey' vocal sound

I've been thinking of trying to emulate Lana Del Rey's vocal sound to see if it might work on the vocals for 'Let Go', so after a quick Google search I came across this article, which I found a very interesting read - I never knew the album version of 'Video Games' was actually the demo version. I had never really noticed that the piano was sampled, or listened carefully enough to the strings and harp to realise they were also sampled - using IK Multimedia's Miroslav Philharmonik, which I own and always thought was inferior to the East West libraries (which to my ears still sound quite 'fake'). This is why, for my songs, I've tried to use real instruments as much as possible, doubling the sampled strings with real violin, or creating the illusion of an ensemble by layering many takes of the one violin.

Listening back to 'Video Games', the orchestral instruments do indeed sound quite 'MIDI', but I can tell a lot of effort went into drawing in the articulations. It's a simple arrangement, but it works well and was tastefully done - and it's encouraging to think that a track created in this way had such massive exposure and made it onto the radio.

I found this excerpt particularly interesting in relation to the Lana del Rey's signature vocal sound: 'The vocals were bussed to an aux track and we had several plug-ins on the bus. In addition to the high-pass EQ, we had a Waves de-esser to knock out some of the pops and hisses, and a Waves compressor with a light ratio but a heavy threshold, and then another compressor, the H-Comp, with a higher ratio. I like to run multiple compressors in a signal chain, with none of them compressing too heavily. It does not 'small' the sound as heavily as one maxed out compressor. After that, we had an H-Delay and an RVerb, just set to a hall reverb, mixed in really high but with the bottom and top end rolled off, so the main reverb you were getting was mid-range.'

The funny thing here is that they didn't do anything particularly special to achieve that sound - that is actually the plugin chain I normally use for processing my vocals.



Here is the link to the article:
http://www.soundonsound.com/people/robopop-producing-lana-del-reys-videogames


Friday 5 August 2016

AV Session at UH (B01)

Along with working on my own music, I've been trying lately to put together some recordings and video for a future showreel that will hopefully go onto the 'Portfolio' section of my website and maybe will help get me some work recording or filming other people.

All the video experience I had before was from shooting my own videos (though, of course, someone else was filming me) and then editing them in Final Cut Pro X. I find editing video very satisfying and I usually have a good eye for choosing angles and making people look good on camera, so I thought it would be a good opportunity to make some videos for other people using the facilities at UH. After getting trained on the Canon C100 cameras at UH, I was ready to go.

This week I had two AV sessions: one solo piano and one singer-songwriter (vocals and electric guitar). I recorded the audio using the Soundcraft Vi2 desk in B03 and left the recording running through the several takes while I was in the performance space, manning the cameras. I used two cameras at the same time, one static and one moving, and changed angles after each take. The audio was not recorded to a click, so the editing will be a bit challenging, but I have done this kind of editing before for my own songs, and even though matching the audio and video takes requires more effort, I've found it's worth it to retain the 'live' feel of the performance.

For the piano session I used 2 pairs of KM184s (condenser, cardioid) - one pair for the spots (one mic for the low strings, one for the high strings) and one pair in an ORTF configuration (on a stereo bar, 17cm distance between the mics, angled away from each other at about 110 degrees) to pick up a more distant, roomy sound.


For the second session, I used an SM57 on the guitar amp and also recorded the DI'ed signal. For the vocals I used a U87 and, as a special request from the artist, an SM58 (he wanted the recording to resemble his live performances, in which he alternates between two mics, one of them with a lot of reverb on it to create a special effect). I resolved to use only the signal recorded through the U87 and add reverb on the relevant bits.




Sunday 31 July 2016

Recording Brass for 'If Silence' and 'Falling Star'

A couple of weeks ago I recorded a small ensemble of brass players for two of the tracks that will be on 'City of My Mind', the album I'm working on - and part of my MSc portfolio.

The ensemble was formed of three trumpets, four saxes and one trombone. Each pair of instruments was miked with AKG C414s and U87s.

The arrangement was written in Logic using sample instruments and then exported as a score.

Here is a video from the session, the chorus of 'If Silence':


Thursday 28 July 2016

Drum Recording

Today I recorded drums in Studio 1 at UH for 'If Only You Were Real' and 'Falling Star', two tracks that I'll be submitting for my Final Project.

I used fewer microphones than I normally would, partly because there were not enough mic stands available, and partly because I've lately changed my mindset: I'm trying not to use more mics when fewer will do the job.

Mic list:
Overheads: AKG C414 spaced pair
Kick: AKG D112
Snare: Shure SM57
Toms: Sennheiser MD421 x 3

Usually I would have added to this a snare bottom, a kick out, a hi-hat spot and a pair of room mics.
Another thing I did differently from last time I recorded in Studio 1 is that I recorded into Logic (I normally record in Pro Tools). However, the rest of my project is in Logic, so it made sense to keep everything in the same DAW. Now onto editing!





Tuesday 26 July 2016

Recording piano in B01

Today I recorded some piano for three tracks: one ('City of My Mind') is part of my project; the other two will be acoustic versions onto which I'll overdub vocals later on and film videos to post on my YouTube channel.

The piano was recorded with a pair of Neumann KM184s placed under the lid of the piano and spaced out to pick up the two main "clusters" of strings (the thinner strings for the high notes, located at the widest part of the piano, and the thick, low strings, which reach to the end of the tail). These microphones provide a detailed, close, brilliant sound when placed here. I used another pair of KM184s with omnidirectional capsules, pointed at the tail end of the piano, facing in. This combination really brings out the low end and also provides a more ambient perspective. On its own it's actually quite muddy, but mixed in a little it can add body to the sound. Lastly, a pair of AKG C414s set to cardioid were used as room mics. These were placed in a near-coincident (ORTF) configuration to provide an accurate image of the piano, around three metres from it, and when mixed in gave control over the front-back perspective by introducing the natural reverb of the room.

Friday 15 July 2016

'City of My Mind' Percussion

Today was dedicated to editing the percussion I tracked a while ago for my song 'City of My Mind' and bringing everything into Logic.

Different DAWs ultimately serve the same purpose, but the functionality of some makes them, in my opinion, more fit for certain tasks; for this reason some of my projects go through several DAWs before everything comes together in one single project.

For example, this song was arranged and written in Logic 9; the piano and the percussion were recorded and edited in Pro Tools; the 'strings' (one violin recorded many times with changing microphone positions, to recreate the sound of an ensemble) were recorded in Reaper. Everything was then bounced as stems and brought together in a Logic X project to be mixed. The reasoning is that I find Logic better for working with MIDI and composing, Pro Tools' commands faster for editing, and Reaper, as free software, very handy for recording on other computers or for transferring projects and stems from one machine to another, which is what happened when I recorded the strings.

This is a video from the recording session with Kohi, an amazing percussionist. I had fun recording him and also producing, which is apparent from the video.






Tuesday 21 June 2016

The musician as promoter in social media

For audiences, social media provides new ways of discovering artists. For musicians, it offers cheap and accessible channels of promotion. Online video platforms like YouTube allow musicians to post music or performance videos and create an image to share with their fans. In 2006, MySpace was the main website where musicians shared music; there, artists like Owl City or the Arctic Monkeys built online communities and went on to be discovered and signed by major labels. Some of the popular music-sharing websites today are SoundCloud, ReverbNation and Bandcamp (the latter of which allows artists to sell their music on a pay-what-you-wish basis); they all provide sharing buttons for social media websites such as Facebook, Twitter or Tumblr. These websites provide a basis for grassroots marketing, as they make it easy for music and videos to be shared and spread via online 'word of mouth'.

Social media allows not only the sharing of one's music, but also the construction of an online brand and persona, and of the 'fake intimacy' built through online interactions with fans. According to Baym (2012, p. 288), social media such as Twitter create a new expectation of intimacy. For musicians, since music is so easily shared online, the expectation to maintain ongoing connections and affiliations with their fans is even stronger. Marwick and Boyd (2011, p. 140) describe micro-celebrity as 'a mindset and set of practices in which audience is viewed as a fan base; popularity is maintained through ongoing fan management; and self-presentation is carefully constructed to be consumed by others.'

The fan/artist relationship is essentially a customer/retailer relationship. As many musicians nowadays sell their music online, they need to build a relationship of trust with their customers. McCourt (quoted in Styven, 2007, p. 60) points out that online services try to compensate for their lack of materiality by offering features such as personalisation and community functions. According to Koernig (quoted in Styven, 2007, p. 61), strategies for making the intangible more tangible often include the use of pictures, physical symbols, or facts and information. For an indie musician, this translates into building a personal brand and using social media to create an online presence; building a community around this online persona, with the mindset of a 'micro-celebrity', is an essential part of the process of self-promotion.

Although social media is a valuable tool for reaching audiences and for receiving immediate feedback, maintaining an online persona and an ongoing interaction with the fans can prove very time-consuming and even difficult for some artists, who feel uncomfortable being social with strangers on the internet, as Baym’s study (2012) shows.

Monday 6 June 2016

Re-arranging, re-mixing and re-mastering an older track: Lotus Flower

As part of my new EP, I am including an older track (published last year on YouTube) which received good feedback from my online followers and was not included on any other release.
The track is called 'Lotus Flower' and its theme is social inequity - it explores the relationship between the less fortunate and ordinary people, the main idea being that we usually all pretend to care while at the same time being happy that we are not the miserable ones.

This is the track in its original version:



In this version, the track has quite an unusual structure for a song; I believe it is the longest song I have written, running over six minutes.

What is specific to this song is that instead of the traditional alternation between verse and chorus, there is a constant switch between vocals and violin, the two leads having balanced importance. The timbre of the real violin adds a lot to the overall vibe and to the quality of the mix, providing an interesting contrast to the synths and drum sounds in Logic that form the rest of the backing. 

The intro is extended to about a minute and a half and it starts with a violin solo in 4/4; I wrote this part in the high register of the instrument to make it sound more wailing and emotional.  When the beat comes in, the drums and bass are in 5/4, while the violin and the piano are still in 4/4. There is also an arpeggiated synth/pad with a 6/8 pattern, which is present almost all throughout the song. 
The bass line constructs a melody in 5/4, while following the drumbeat in a way that makes the irregular rhythm feel almost natural.

Sample bassline:
The first verse is followed by a beautiful violin solo section, which leads into the second verse. An 8-bar interlude allows a house beat to creep in with a fade-in; the house beat stays until the end of the section. The reversed piano that was introduced before, as a prefiguration of the second section, makes a second appearance and links the two big sections of the song. The violin takes the piano line from the very intro and plays it, then dubs it up a third; this becomes the main motif around which the whole second half of the song is built. The rhythm changes to 12/8, matching the same piano pattern.
This piano motif:
is quantised in triplets and becomes this, in 12/8:
Maintaining the dialogue, the vocals imitate the violin, singing the same notes and then dubbing the melody a major third up. To provide symmetry to the track, the drums add an extra 'beat' of 3/8, turning the rhythm into 15/8 - an equivalent of the 5/4 we encountered at the beginning of the song, since a bar of 15/8 holds five dotted-crotchet beats just as a bar of 5/4 holds five crotchet beats. There is symmetry in terms of melody as well: first we come back to the melody of the first verse, essentially finishing the song with a third 'verse', which is followed by the violin playing the exact same melody that was played in the intro, with the same accompaniment and background.

Structure diagram of 'Lotus Flower' - original version:
The main influences for this track come from my past experience of listening to Café del Mar and 'Absolute Relax' albums - the sort of chilled tracks with a solo violin playing the melody. I was particularly thinking of Enigma's 'Secret Garden' and replicating that sort of sound, which I did by adding lots of reverb and by boosting mids and lows on the EQ, because my raw sound was very screechy and thin. The piano responses to the violin in the second violin solo section were inspired by Moby's 'Porcelain' and are played in the same free manner, humanised in Logic around the beat. However, my track is not that 'chilled': the vocals are in a pop-rock style, other influences come from acid house tracks, and the shuffle beat is an influence from rock bands (songs like Muse's 'Uprising' or Kasabian's 'Fire' come to mind).

EQ on violin:

To make the track fit with the rest of the EP, I decided to re-arrange it; the mix could also be improved and, of course, if I was changing these things, I had to master it again.

I decided the initial length was too much, so the long intro was the first thing to go. To make the track more easy-listening, I also got rid of all the unusual time signatures, turning everything into 4/4 in the first half and 6/8 in the second half of the song.

Because the mix was too cluttered, I got rid of some of the layers (particularly pads) and turned others down, keeping them in just to add texture. I turned the drums up and spent some time working on the drum sound with compression and EQ. I also worked on the vocal sound and added some pitch correction where needed.

For the mastering, I used a combination of the Waves Multimaximizer and FabFilter Pro-L (a limiter), a new plugin that I'm trying out. I found Pro-L very useful, as it has different presets that work for different things - for example limiting, or adding simulated tape saturation to the audio.

My mastering project for 'Lotus Flower':



I am very satisfied with my work over the last few days (and especially yesterday), because there is a massive difference between the two mixes. I have also managed to make the latest master as loud as other commercial releases, which I couldn't do in the past without making it sound over-compressed.

And this is the final result:

Wednesday 1 June 2016

Introduction

This blog will document my progress while working on a portfolio of songs to submit for my MSc Music and Sound Technology (Audio Engineering) final project.

My intention is to produce a studio EP or album containing 5 to 10 tracks. The tracks will all be my original songs and I will aim to deliver them to a standard matching commercial releases in my niche, and eventually distribute them via online retailers and on physical CD via Bandcamp.

The style of music I identify with at the moment is a combination of art pop, new age and electronic, so the arrangements will be varied, but still homogeneous as a release. For example, one track contains orchestral strings and brass (recorded) arranged in a somewhat classical manner, but on top of an electronic beat. Most of the material is piano-driven; some tracks will have electronic drums, but most will have real drums. Other instruments featured will be flute, oboe, violin, cello, percussion, etc.

There will be a great deal of recording involved, so there will be opportunity to explore different recording techniques. Some of the parts, though, such as bass and some guitars, will be recorded by musicians in their home studios and sent back to me.

My main focus will be on the production side, from arrangement and the writing of parts, to working with the musicians to achieve the desired sound and, of course, mixing and mastering. A lot of the recording and sequencing work has already been done throughout the year, so over the following couple of months I will mainly be recording the few parts that haven't been tracked yet, recording all the vocals, mixing, mastering and, of course, writing the report.

At the end of the project, I will deliver the finished tracks ready to be released. 

As a singer-songwriter and producer, I am already aware of the business context of my work. I have released two EPs via online retailers and I am monetising my YouTube account. The tracks resulting from this project will be part of my debut album 'City of My Mind'. Some of the tracks will be published on my YouTube channel (which at the moment is only 1,000 views away from half a million), but I'm also intending to start a Kickstarter campaign for the album (even though it will already be recorded), to make sure I sell the number of copies that will be pressed on physical CD (so, for duplication purposes), and also to fund a music video. I might also consider using some of the Kickstarter money to send the tracks to Abbey Road for mastering later on, but of course, for the final project version I will do my own mastering. I will probably also invest in some Facebook promotion which will hopefully turn into more fans and come back as more sales. I already have a rough idea of my market audience, but I am planning to explore this in more depth.

Essentially, I will build upon my already existing online presence and create a product at the quality standard of a commercial release, suitable for my audience.

https://www.facebook.com/MissJewelia/
www.youtube.com/jewelia1
https://missjewelia.bandcamp.com/
www.jewelia-music.com
www.soundcloud.com/missjewelia