Thursday, July 30, 2009

Oh my God!

I started reading some cognitive speech stuff recently, and there was this chapter talking about the mechanics of the human ear. It was OMG!

I thought it was just a sound wave (a change in air pressure) arriving at the outer ear, then the wave gets transferred to the eardrum, then to the ossicles (the little bones thingy), and at last it reaches the cochlea. Through the nerves, the signal is carried to the brain. Then finished!

But now that I've read something on it, only now do I know that the ossicles act as a kind of signal matcher/normalizer (matching the airborne wave to the fluid of the inner ear), and the cochlea somehow "filters" out the ultrasonic frequencies. Dogs have a different cochlea size, hence they hear a wider bandwidth. Then along the cochlea there are the hair cells. There are up to 2000 of them (if I'm not mistaken), and they convert the mechanical signal into electrical signals so it can be sent up to the brain.

And one of the most interesting things is that the speed of an electrical impulse travelling along a nerve is a lot slower than the speed of a signal in a normal copper wire. But how? How do we manage to transmit such a complex, microsecond-scale signal? Scientists say that the hair cells each take a different slice of the signal and send it to the brain over their own nerve fibres. When the brain reads them, it regroups them into something you can understand... OMG! Now I know!
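Just to play with the idea (a toy analogy in Python, not the ear's real wiring; the sample rate, band edges and test tones are all made up): split one fast signal into many narrow frequency bands, let each "slow channel" carry only its own band, then recombine them at the end.

```python
# Rough analogy only: many narrow parallel channels standing in for the
# thousands of hair-cell/nerve channels, recombined by the "brain" at the end.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000                                  # sample rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)

edges = np.geomspace(100, 6000, 13)         # 12 log-spaced bands (invented)
channels = []
for lo, hi in zip(edges[:-1], edges[1:]):
    b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
    channels.append(lfilter(b, a, signal))  # each "nerve" carries one band

recombined = np.sum(channels, axis=0)       # crude stand-in for the brain regrouping
print(len(channels), "slow parallel channels ~ one fast signal")
```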

How does it do that? It's like 2000 cables running into a mixer and expecting the mixer to automatically (as if programmed to) put them back together nicely into a "lossless" signal.

[The nervous system's limitation, e.g.: for a normal person around 1.6m tall, a fraction of a second is needed for a signal sent by the brain to reach the feet, so that the feet can react. Meaning, a lag. Hence, an organ player usually has to memorize the foot movements, because they have to move their foot to the right pedal and step on it before they can "hear" the sound. If they only reacted after they "hear" the sound, everything would be too late! Interesting!]
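A back-of-the-envelope version of that lag (the conduction speeds below are rough, assumed figures; real nerve fibres vary a lot):

```python
# Back-of-the-envelope nerve lag, with assumed (rough) conduction speeds.
height_m = 1.6                      # brain-to-foot distance, roughly
copper_signal_speed = 2e8           # ~2/3 the speed of light in a cable (m/s)
nerve_speed_fast = 100.0            # fast motor fibres, roughly (m/s)
nerve_speed_slow = 1.0              # slow unmyelinated fibres, roughly (m/s)

print(f"cable : {height_m / copper_signal_speed * 1e3:.6f} ms")
print(f"fast  : {height_m / nerve_speed_fast * 1e3:.1f} ms")   # ~16 ms
print(f"slow  : {height_m / nerve_speed_slow * 1e3:.1f} ms")   # ~1600 ms
```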

"God CREATES that."

Ref: Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics, MIT Press

Wednesday, July 29, 2009

It was a "chance"? or miracle?

After a very, very, very long thought, and after referring to a lot of data, I found that "acoustics" involves a series of very complex wave interactions. It may involve sound decay, sound delay, reverberation, echo, the effects of our own neural limitations, etc., whether linear or non-linear.

A far-away voice may sound "soft", but that "soft" is very different from the "soft" we get by turning down the volume of a hi-fi system. Sound decays over distance, but the high frequencies decay faster than the low frequencies, due to the nature of the environment (air absorption, obstacles). A far sound will be "soft" in the treble but "not-too-soft" in the bass. It's frequency-dependent, not a flat volume drop! The room's acoustics also change the wave pattern: a far voice becomes slightly more reverberant. Etc., etc...
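A toy way to hear that difference (a sketch with invented numbers; a single one-pole low-pass is nowhere near a real air-absorption model, but it shows the "softer and duller" effect):

```python
# Toy model of a far voice: overall level drops with distance, and the highs
# drop faster (a one-pole low-pass whose cutoff falls as distance grows).
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
voice = np.random.randn(t.size)             # stand-in for a broadband voice

def far_away(x, distance_m):
    gain = 1.0 / max(distance_m, 1.0)       # inverse-distance level drop
    cutoff = max(20000 / distance_m, 500)   # highs roll off sooner when far
    b, a = butter(1, cutoff, btype="low", fs=fs)
    return gain * lfilter(b, a, x)

near = far_away(voice, 1.0)    # treble mostly intact
far = far_away(voice, 20.0)    # softer overall, and noticeably duller
```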

My church (before renovation) had a so-called "warm" sound. In those days I tried very hard to mix the bass so that it would be clearer, but I failed. Only now, when I look back at what I was trying to solve, do I realise the whole picture! (I hope I'm right.)

Different materials absorb different frequencies; usually the hardest/most costly to tame are the bass frequencies. My church had some partitions, carpets and cushions, plus a ceiling, which seemed to absorb sound pretty well, but only the HIGH frequencies.

My church had under-powered amps driving the FOH speakers and the bass guitar. Yet it sounded so warm. The answer could be that the room attenuated the mid and high range while leaving the bass untouched, letting its energy build up. Hence, while the midrange and treble (the range our ears are most sensitive to) stayed normal (no acoustic energy build-up), the bass built up, creating an illusion of deep bass. It also explains the "muddy" character of the bass, which is very much undesired. But we like bass; bass gives energy. Try listening to music on a laptop speaker: the louder you make it, the harsher and noisier it sounds. It has to be balanced by bass.

Anyway, I just wanna stress that the acoustics of the room were kinda... just right for the under-powered amps, and it still sounded so nice. We once removed a partition and the bass became a lot muddier, and it's since then that I've wanted to get the bass sound right, nailed to its position, not floating around and messing up the other instruments.. HAHA! Quite lucky to get this kind of "just fine" or "it wasn't even tuned!!?" environment, or I should say "Thank God" for knowing our budget..

The newly renovated church's acoustics are very challenging indeed! The problem now is how to equalise the system to suit our ears, given the room's acoustics! Perhaps it won't be a game of chance this time (serious, lol!!).

Friday, July 10, 2009

Mixing. EQ

Equalizer (EQ)? You may find one on a mini hi-fi. The most basic kind is the "tone" or "bass, mid, treble" adjustment. In short, an equalizer is a device that changes the tonal character of a sound.

There are quite a number of them, from the most basic bass-treble kind, to the wide-range 32-band graphic EQ, to the most sophisticated parametric EQ.

Why change the sound's character at all? Isn't the original sound the most important thing? The most expensive amplifier gives the most faithful sound, right?

Actually, in my personal view, the original sound is important, but when playing in a band/orchestration, some frequencies can be cut to ensure the clarity of every instrument. For example, when you boost 100Hz on the kick drum, you may also want to cut 100Hz on the bass guitar while boosting it at maybe 250Hz (its very own place in the wide frequency spectrum). This is to ensure they don't overshadow each other. Clarity! (A rough sketch of this kick/bass carve-out is just below.)
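Here's a minimal sketch of that carve-out, using the well-known RBJ "audio EQ cookbook" peaking-filter formulas; the gain and Q values are arbitrary examples, not a recommended setting, and the signals are just placeholders.

```python
# Peaking-EQ sketch (RBJ audio-EQ-cookbook biquad): boost the kick at 100 Hz,
# cut the bass guitar at 100 Hz and give it its own bump at 250 Hz.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Apply one peaking-EQ band (positive gain_db = boost, negative = cut)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return lfilter(np.array(b) / a[0], np.array(a) / a[0], x)

fs = 48000
kick = np.random.randn(fs)                  # placeholder signals
bass_gtr = np.random.randn(fs)

kick_out = peaking_eq(kick, fs, 100, +4)        # boost kick at 100 Hz
bass_out = peaking_eq(bass_gtr, fs, 100, -4)    # cut bass at 100 Hz...
bass_out = peaking_eq(bass_out, fs, 250, +3)    # ...and boost its own spot
```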

People usually like to make a "smile" curve on the graphic EQ of their home system; this is due to a psychoacoustic effect in human hearing. Our ears perceive loudness on a non-linear scale that also depends on frequency. Meaning, a boost that makes the midrange sound twice as loud will not necessarily make the bass sound twice as loud to your brain, even at the same boost power. We are usually less sensitive to bass and treble at low listening volumes, hence we tend to boost them on everyday consumer hi-fi. But in live music, where the SPL rises to a significant loudness, things become different: EQ is then used for clarity and feedback control. If used wrongly (over-boosted), it may cause distortion, and distortion destroys speakers.

Still, the original sound remains the most important thing.

Yet, how come... Perhaps a room with weak acoustics just isn't meant to be mixed nicely? But when I play a CD on the system it sounds just fine, only a little bit muddy/draggy.. How do you mix in a room where only some of the sources are miked? How do you mix when some instruments just don't need to be, or can't be, miked? How do you compensate? Any ideas?

Monday, July 6, 2009

stereo stage sound

I wanna talk about sound stage.

A sound engineer or sound designer does a job we call "mixing". What is mixing? What is its importance in the music industry?

Basically, mixing is a process with two goals: 1. balancing, or making multiple instruments coexist with each other (clarity is of the utmost importance); 2. presenting the song in a creative way (climax & resolution).

When the artist has finished recording the multiple tracks played by different instruments, he'll send them to the mixing studio, and the engineer will try to fit every one of them in, deciding when and where to add, cut, etc...

A few parameters that can be played around with:

1. The pan (a.k.a. left or right). It's pretty obvious that modern pop songs have this characteristic. This is very different from the olden days of mono playback, where the sound came from only one speaker/source. Panning gives a horizontal dimension, so we can imagine the sound stage in 2D (instead of the 1D of mono!) while enjoying the music. Separating instruments out makes them sound more independent, not muddled together. (A small panning sketch follows this list.)

2. The reverberation. Reverberation produces a psychoacoustic effect: depth of stage. When you hear a sound from far away and compare it with a guy talking right in front of you, what's the difference? The far one has a more reverberant sound than the one right in front of you. It's the same concept when you want to bring an instrument up front or push it deep into the stage! Again, a wider and deeper stage creates a 3D image, not a boring flat "paper". Just be careful not to make it so muddy and "viscous" that it eats up all the space; you don't want your stage to become a cave. Clarity! Psychoacoustically, reverberation also changes a person's sense of the source's vertical position: it seems to rise when reverb is added. (The sketch below also includes a crude wet/dry "depth" control.)
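A minimal sketch of those two parameters (the constant-power pan law is just one common choice, and the decaying-noise "reverb" is only a stand-in for a real one):

```python
# Pan + depth sketch: constant-power panning for left/right placement, and a
# wet/dry mix with a synthetic decaying-noise "reverb" as a crude depth control.
import numpy as np

fs = 44100
rng = np.random.default_rng(0)

def pan(mono, position):
    """position: -1 = hard left, 0 = centre, +1 = hard right (constant power)."""
    angle = (position + 1) * np.pi / 4
    return np.cos(angle) * mono, np.sin(angle) * mono   # (left, right)

def push_back(mono, depth):
    """depth 0..1: more reverb and less direct sound = further away."""
    n = int(0.8 * fs)
    ir = rng.standard_normal(n) * np.exp(-np.linspace(0, 8, n))
    wet = np.convolve(mono, ir)[: mono.size] * 0.05
    return (1 - depth) * mono + depth * wet

guitar = rng.standard_normal(fs)                  # placeholder track
left, right = pan(push_back(guitar, 0.4), +0.6)   # slightly back, panned right
```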

Another interesting thing to consider: when you pan the piano to the left speaker and the guitar to the right speaker, does that mean a stereo stage? No, it doesn't. It's dual mono. To make the piano a stereo stage instrument, you must also send the piano's reverberant signal to the right speaker. Both speakers must be treated as presenting the same stage from the same place. Meaning, the left speaker acts exactly like the piano (playing the pure, direct notes) while the right speaker acts like the stage wall bouncing the piano notes back (with the room acoustics all taken into account). (A rough sketch of this is below.)
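A rough sketch of that dry-left / reverberant-right idea (again, the decaying-noise impulse response is only a stand-in for real room acoustics):

```python
# The dry piano goes to the left speaker, while the right speaker carries only
# the reverberant ("wall bounce") version of the same piano: one stage, two views.
import numpy as np

fs = 44100
rng = np.random.default_rng(1)
piano = rng.standard_normal(2 * fs)          # placeholder piano track

ir = rng.standard_normal(fs) * np.exp(-np.linspace(0, 6, fs))   # fake 1 s room
wall_bounce = np.convolve(piano, ir)[: piano.size] * 0.05

left = piano                                 # the direct notes
right = wall_bounce                          # the "stage wall" reflection only
stereo = np.stack([left, right], axis=1)     # a stereo image, not dual mono
```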

That's all for today! It's really fun!