Sunday 21 August 2016

New video is out


Finally finished the mix for this song. The full version will be on the album/portfolio and features a more complex orchestration - drums, guitar, bass, piano, cello, oboe and flute - which have now been edited and are waiting to be mixed. However, this video is part of the promotional strategy: posting acoustic versions on YouTube of the tracks that will be available for purchase in full later on. This also means that, when I put out a 'best of' compilation of acoustic performances, I'll essentially be able to sell the same song twice.

The video was finished a while ago, but I had trouble with the mix because:
1) the piano was muddy and had a lot of noise on it (pedal and chair squeaks)
2) the vocal bleed was very loud on all the piano tracks, so pitch correction was very, very hard to do without the sound getting phasy.

The first problem was fixed by using iZotope's De-crackle to get rid of (some of) the noises, plus a lot of EQ-ing and making up for the cut frequencies with the Waves Exciter.
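For anyone curious, here's roughly what those corrective moves would look like written out in code - a rough sketch only, using Spotify's open-source pedalboard Python library, with made-up frequencies, a hypothetical file name, and plain EQ standing in for the de-crackle and exciter stages, not my actual session settings:

# Rough sketch of corrective EQ on a noisy piano track using the "pedalboard" library.
# Frequencies and file names are illustrative only - the real moves were done by ear
# in the DAW, and the de-crackle / exciter stages have no direct equivalent here.
from pedalboard import Pedalboard, HighpassFilter, PeakFilter, HighShelfFilter
from pedalboard.io import AudioFile

cleanup = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=60),                       # cut pedal thumps and rumble
    PeakFilter(cutoff_frequency_hz=250, gain_db=-4, q=1.0),       # reduce the mud
    PeakFilter(cutoff_frequency_hz=2800, gain_db=-3, q=6.0),      # notch a squeak resonance
    HighShelfFilter(cutoff_frequency_hz=8000, gain_db=2, q=0.7),  # add back some air lost to the cuts
])

with AudioFile("piano_raw.wav") as f:          # hypothetical input file
    audio = f.read(f.frames)
    sample_rate = f.samplerate

processed = cleanup(audio, sample_rate)

with AudioFile("piano_cleaned.wav", "w", sample_rate, processed.shape[0]) as f:
    f.write(processed)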

The second problem was a bit more difficult to tackle and, funnily enough, where both Auto-Tune and Melodyne (!) failed, Logic Pro X's Pitch Correction managed to fix the pitchy notes without being terribly obvious. I would never have expected it, but it goes to show that some plugins serve certain purposes better than others because of their different algorithms.
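The analysis side of the job is the same for all of them - estimate the sung pitch and work out how far it is from the nearest semitone - and it's the way they apply the shift that differs (and where the artefacts come from). A tiny conceptual sketch of that analysis step, using the librosa library with a placeholder file name, not the actual algorithm of any of those plugins:

# Conceptual sketch: measure how far each voiced frame is from the nearest semitone.
# This is the analysis step every pitch corrector shares; the resynthesis is where
# Auto-Tune, Melodyne and Logic's Pitch Correction differ.
import numpy as np
import librosa

y, sr = librosa.load("vocal_take.wav", sr=None, mono=True)    # hypothetical file

f0, voiced, _ = librosa.pyin(y,
                             fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"),
                             sr=sr)

midi_pitch = librosa.hz_to_midi(f0)                # continuous pitch per frame, in MIDI note numbers
nearest_note = np.round(midi_pitch)                # nearest equal-tempered semitone
cents_off = 100.0 * (midi_pitch - nearest_note)    # correction each frame would need, in cents

print("Worst voiced frame is %.1f cents off pitch" % np.nanmax(np.abs(cents_off[voiced])))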

Friday 12 August 2016

Band Recording

Today I recorded a band - the first full-band recording I've done at UH! I was a bit nervous, but going to Studio 1 and setting everything up yesterday saved a lot of hassle today. First of all, I had time to think over the placement of the band and to spend time positioning the mics. I recorded everything together, except for the vocals, which were overdubbed later.

The problems started early in the day, when the signal from one of the overheads was not coming through. I switched the cables, tried a different input, etc. - no result. I was afraid the mic was broken, but when I swapped the two overheads, the other one wasn't working either. It turned out that some of the desk inputs weren't working. Later on, when I couldn't get the DIs to work, I realised what the problem was: the phantom power must be broken on some of the channels. The KM184s and the DIs didn't work on those channels, but an SM57 did. I was using Logic, so with that setup I was limited to 16 channels - 12 after discounting the broken ones.

I decided to ditch the guitar DIs, mic the amps only, and give up the bottom snare mic to free up some channels. In the end, the setup was this:

1. Kick: Shure Beta 52
2. Snare: SM57
3. Hi tom: MD421
4. Mid tom: MD421
5. Low tom: MD421
6. OH L: KM184
7. OH R: KM184
8. Bass: DI
9. Guitar 1: SM57
10. Guitar 2: MD421
11. Vocals: Shure SM58 for the 'scratch' vocal, U87 for the overdubbed vocals

I managed to get a surprisingly good sound! I don't think I've ever got such a good sound for a band before, especially for the guitars.
We recorded 2 songs. The first one went a bit slowly, because I tried to get the band to play to a click, which they weren't used to. There were no vocals either, so they kept messing up the structure. With a bit of practice, they managed to stay on the click and I think they delivered a reasonably tight performance.
For the second one, I decided to give up the click and go for a more live feel. I also gave the singer an SM58 to sing into and put the vocals in the foldback. The result was a much more heartfelt performance and we finished the song in probably a quarter of the time the first one took.
The next step was quickly choosing the best take with the band and overdubbing the vocals for the first song. Listening back to the second one, I realised that there was absolutely no bleed whatsoever from the scratch vocals, so I muted them and overdubbed them with the much nicer U87.
Due to how I set up the room, the instruments were pretty well isolated, even though they all played at the same time. I made a booth for the drums at the back of the room using two panels, and placed one guitar amp in the opposite corner of the room, facing the treated wall, with its back to the drums and an acoustic panel behind it. The other guitar amp was on the other side of the same panel, facing away from the first amp, towards the drum booth. The bass was DI-ed. The vocalist faced the other corner of the room, next to the bass player, away from the guitar amps and the drums.

After recording vocals, we had enough time to play around with backing vocals, a whistle solo and some group clapping. Not sure how much of that will make it onto the final recording, but we certainly had fun!

Here is a link to the band's music: https://soundcloud.com/themarras1-1/almost-old

Tuesday 9 August 2016

The 'Lana Del Rey' vocal sound

I've been thinking of trying to emulate Lana Del Rey's vocal sound and seeing whether it might work on the vocals for 'Let Go', so after a quick Google search I came across this article, which I found a very interesting read - I never knew the album version of 'Video Games' was actually the demo version. I had never really noticed that the piano was sampled, or listened carefully enough to the strings and harp to realise they were also sampled - using IK Multimedia's Miroslav Philharmonik, which I own and always thought was inferior to the East West libraries (which to my ears still sound quite 'fake' themselves). This is why, for my songs, I've tried to use real instruments as much as possible, doubling the sampled strings with real violin or creating the illusion of an ensemble by layering many takes of the one violin.

Listening back to 'Video Games', the orchestral instruments do indeed sound quite 'MIDI', but I can tell a lot of effort went into drawing in the articulations. It's a simple arrangement, but it works well and was tastefully done - and it's encouraging to think that a track created in this way got such massive exposure and made it onto the radio.

I found this excerpt particularly interesting in relation to Lana Del Rey's signature vocal sound: 'The vocals were bussed to an aux track and we had several plug-ins on the bus. In addition to the high-pass EQ, we had a Waves de-esser to knock out some of the pops and hisses, and a Waves compressor with a light ratio but a heavy threshold, and then another compressor, the H-Comp, with a higher ratio. I like to run multiple compressors in a signal chain, with none of them compressing too heavily. It does not 'small' the sound as heavily as one maxed out compressor. After that, we had an H-Delay and an RVerb, just set to a hall reverb, mixed in really high but with the bottom and top end rolled off, so the main reverb you were getting was mid-range.'

The funny thing here is that they didn't do anything particularly special to achieve that sound - and that is actually the plugin chain I normally use for processing my vocals.
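Out of curiosity, here's what that bus chain looks like roughly sketched out in Python with Spotify's pedalboard library. The settings are my guesses rather than Robopop's actual values, there's no de-esser equivalent in the library, and the band-limited hall is faked with a filtered parallel path:

# Loose sketch of the vocal bus described in the article, using the "pedalboard" library.
# All values are guesses for illustration; the de-esser stage is omitted because the
# library has no equivalent.
from pedalboard import (Pedalboard, Mix, Gain, HighpassFilter, LowpassFilter,
                        Compressor, Delay, Reverb)

vocal_bus = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=100),              # the high-pass EQ on the bus
    Compressor(threshold_db=-24, ratio=2.0),              # light ratio, heavy threshold
    Compressor(threshold_db=-12, ratio=4.0),              # second compressor, higher ratio
    Delay(delay_seconds=0.35, feedback=0.2, mix=0.12),    # standing in for the H-Delay
    Mix([
        Gain(gain_db=0),                                  # dry path, untouched
        Pedalboard([                                      # 'send' path: band-limited hall
            HighpassFilter(cutoff_frequency_hz=300),      # roll off the bottom of the reverb...
            LowpassFilter(cutoff_frequency_hz=6000),      # ...and the top, leaving the mids
            Reverb(room_size=0.9, wet_level=1.0, dry_level=0.0),
            Gain(gain_db=-3),                             # 'mixed in really high'
        ]),
    ]),
])

Running vocal_bus(vocal_audio, sample_rate) on a vocal bounce would then give a rough starting point for that kind of sound.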



Here is the link to the article:
http://www.soundonsound.com/people/robopop-producing-lana-del-reys-videogames


Friday 5 August 2016

AV Session at UH (B01)

Along with working on my own music, I've lately been trying to put together some recordings and videos for a future showreel that will hopefully go onto the 'Portfolio' section of my website and maybe help me get some work recording or filming other people.

All the video experience I had before came from shooting my own videos (with, of course, someone else filming me) and later editing them in Final Cut Pro X. I find editing video very satisfying and I usually have a good eye for choosing angles and making people look good on camera, so I thought it would be a good opportunity to make some videos for other people using the facilities at UH. After getting trained on the Canon C100 cameras at UH, I was ready to go.

This week I had two AV sessions: one solo piano and one for a singer-songwriter (vocals and electric guitar). I recorded the audio using the Soundcraft Vi2 desk in B03 and left the recording running throughout the several takes while I was in the performance space, manning the cameras. I used two cameras at the same time, one static and one moving, and changed angles after each take. The audio was not recorded to a click, so the editing will be a bit challenging, but I have done this kind of editing before for my own songs, and even though matching the audio and video takes requires more effort, I've found it's worth it to retain the 'live' feel of the performance.

For the piano session I used 2 pairs of KM184s (condenser, cardioid) - one pair as spots (one mic for the low strings, one for the high strings) and one pair in an ORTF configuration (on a stereo bar, 17 cm between the capsules, angled away from each other at about 110 degrees) to pick up a more distant, roomier sound.

For the second session I used an SM57 on the guitar amp and also recorded the DI-ed signal. For the vocals I used a U87 and, at the artist's special request, an SM58 (he wanted the recording to resemble his live performances, in which he alternates between two mics - one of them with a lot of reverb on it to create a special effect). In the end I decided to use only the signal recorded through the U87 and add reverb to the relevant bits.