Sonic Design: Lectures
26/9/2024 - 3/1/2025 (Week 1 - Week 14)
Seh Zi Qi / 0355872 / Bachelor of Design (Honours) in Creative Media
Module Name: VSA60204 Sonic Design
Lectures
List
- Public Holiday (Deepavali)
- Independent Learning Week
- Didn't attend class
- Consultation (the Lecture Recap for this week is documented under Week 11)
- Consultation
- Consultation
- Consultation
Mr Razif introduced us to the module and the exercises and assignments we'll
be doing.
He also explained the different classes: the difference between Sonic Design
and Spatial Audio Design is that Sonic Design works with sound going into the
left and right ears (stereo), while Spatial Audio Design focuses on surround sound.
He then showed us previous students' works for the module. After that, we
tried the equaliser exercises:
WEEK 2
_______________
Lecture Recap
Nature of Sound:
- A vibration of air molecules that stimulates our eardrums
- Sound waves exist as variations of pressure in a medium such as air
Source:
- The object whose vibration produces the sound
Propagation:
- The medium through which the sound travels
- The vibration of an object causes the surrounding air to vibrate
Perception:
- The sound is captured and translated by our brain
- Vibrating air causes the human eardrum to vibrate, which the brain interprets as sound
The vibration and propagation (movement) of air molecules are what we call sound waves
| Fig 1.1, Nature of Sound |
Human Ear:
Outer Ear:
- The external, visible portion of the ear and the ear canal
- Sound waves enter at the pinna, travel through the ear canal and then hit the eardrum
- The eardrum is paper-thin
Middle Ear:
- A small, air-filled cavity containing 3 tiny bones (the malleus, incus & stapes)
- When the eardrum vibrates, the 3 bones amplify the sound vibrations and pass them into the inner ear
Inner Ear:
- Cochlea (the hearing organ)
- Endolymphatic sac
- Semicircular canals
- Once the vibrations pass through the 3 bones, they reach the cochlea. The hair cells inside the cochlea convert the vibrations into electrical signals, which are sent to the brain via the auditory nerve.
Psychoacoustics:
- The study of the subjective human perception of sound
- May concern:
- Pitch
- Loudness
- Volume
- Timbre
Wavelength:
- The distance between any point on a wave and the equivalent point on the next cycle
Amplitude:
- The strength of a wave signal
- The height of a wave when viewed as a graph
- Higher amplitudes = higher volume
Frequency:
- The number of times the wavelength occurs in 1 second
- Measured in hertz (Hz) or kilohertz (kHz)
- Faster vibrations = higher frequency
- Interpreted as higher pitch
Pitch:
- Vibration per second = Frequency
- Less Vibration > Low Pitch > Low Frequency
- More Vibration > High Pitch > High Frequency
- Measured in hertz (Hz)
- Cycles per second
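To connect pitch, frequency and wavelength: since wavelength = speed of sound ÷ frequency, higher-frequency (higher-pitched) sounds have shorter wavelengths. A quick sketch in Python (the 343 m/s figure assumes air at roughly 20°C; the function name is my own):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def wavelength_m(frequency_hz):
    """Return the wavelength in metres for a given frequency in air."""
    return SPEED_OF_SOUND / frequency_hz

# A 440 Hz tone (concert A) has a wavelength of about 0.78 m,
# while a 20 Hz tone (the low end of human hearing) is over 17 m long.
print(round(wavelength_m(440), 2))  # 0.78
```

This is why low frequencies need big spaces (and big speakers) to develop fully.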
Loudness:
- The perceived volume or intensity of a sound
Timbre:
- The quality of a sound that distinguishes it from others
Perceived Duration:
- How long a sound seems to last, quick or slow
Envelope:
- How a sound evolves from start to finish
Spatialization:
- Location (direction or distance) of the sound
Equaliser:
Ear Training:
- 250Hz - "oo" sound
- 500Hz - "o" sound
- 1kHz - "ah" sound
- 2kHz - "a" sound
- 4kHz - "ee" sound
_______________
Tutorial
How to make a copy of the WAV file:
- Change the top knob to show the entire WAV file => double left-click the WAV file => Right-click > Copy to New
Sample Rate:
- 48,000 Hz is the standard
- 192,000 Hz for HD recordings
Mono uses one channel (one side), stereo uses two (left & right)
Best to use 48,000 Hz, Mono, 16-bit
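The sample rate, bit depth and channel count together determine how much data a WAV file takes up: bytes per second = sample rate × (bit depth ÷ 8) × channels. A quick check of the recommended 48,000 Hz / 16-bit / mono setting (the function name is my own):

```python
def wav_bytes_per_second(sample_rate_hz, bit_depth, channels):
    """Uncompressed PCM data rate: samples/sec x bytes/sample x channels."""
    return sample_rate_hz * (bit_depth // 8) * channels

# 48,000 Hz, 16-bit, mono -> 96,000 bytes (~94 KB) per second of audio.
print(wav_bytes_per_second(48_000, 16, 1))  # 96000
```

Switching to stereo doubles this, which is one reason mono is preferred for plain voice recordings.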
When removing the bass, the sound/voice sounds clearer
Decay time:
- Smaller room => shorter decay
- Bigger room => longer decay
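The "smaller room = shorter decay" rule comes from reverb time growing with room volume. The classic Sabine formula estimates RT60 (the time for a sound to decay by 60 dB) as 0.161 × volume ÷ total absorption; a minimal sketch, with the function name my own:

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine reverb-time estimate: RT60 = 0.161 * V / A (seconds)."""
    return 0.161 * volume_m3 / absorption_sabins

# Doubling the room volume (same absorption) doubles the decay time.
print(round(rt60_sabine(100.0, 16.1), 2))  # 1.0
print(round(rt60_sabine(200.0, 16.1), 2))  # 2.0
```

So when simulating a bigger room in a reverb plugin, increasing the decay time is the physically sensible move.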
WEEK 3
_______________
Lecture Recap
Basic Sound Designing Tools:
Layering:
- Taking 2 or more sounds and placing them on top of each other
- Allows us to blend and mix various sounds into a new, unique sound
Time stretching:
- Take a sound that plays at a certain length and sonically stretch the audio within set parameters without changing the pitch
- 10 seconds stretched into 20 seconds = the sound would be slower
- Changes in the pacing, tempo & speed of the audio but not the pitch
Pitch Shifting:
- Change the pitch of a sound without changing the actual length
- Higher = thinner, smaller & high pitch (e.g. Chipmunk voice)
- Lower = bigger, more bass & low pitch (e.g. Zombie/ Monster/ Evil Voice)
- Usually chosen to match the character's appearance
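Pitch shifting in semitones maps to a frequency ratio of 2^(semitones/12), since an octave (12 semitones) doubles the frequency. In a naive speed-change shift (unlike the length-preserving shift described above), this same ratio is also the playback-speed change:

```python
def pitch_ratio(semitones):
    """Frequency ratio for a pitch shift of the given number of semitones."""
    return 2 ** (semitones / 12)

print(pitch_ratio(12))             # 2.0  -> one octave up (chipmunk territory)
print(pitch_ratio(-12))            # 0.5  -> one octave down (monster territory)
print(round(pitch_ratio(3), 3))    # a 3-semitone shift, a more subtle change
```

Dedicated pitch-shift tools resample and time-stretch together so the duration stays the same while only the pitch moves by this ratio.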
Reversing:
- Gives a weird & unnatural sound
- The key is to layer it!
Mouth it!:
- If you can't find it, mouth it!
- Our voice is very flexible and we can create a lot of sound with just a microphone & a lot of practice
- Once you're done recording, you can use the techniques mentioned earlier to fine-tune the sound
- Sometimes the best sounds happen accidentally
- Save and organise the sound effects library
- Add "constant power" to smoothly fade in or out the audio
- Be creative with sound effects
- Don't use the same sound effects all the time
- Set your volume to 60% to 70% while editing
WEEK 4
_______________
Lecture Recap
Sound in film usually goes unnoticed but is the unsung hero
What is Diegetic Sound vs Non-Diegetic Sound?
Diegetic:
- Derived from Diegesis
- The world of the film and everything in it.
- Everything that the character experiences
Non-Diegetic:
- Everything that only the audience can perceive
- Some elements can be non-diegetic like:
- Title cards
- Non-diegetic inserts
- Etc.
Acousmatic Zones:
- Covers the sounds we hear but whose source we can't see
- Offscreen:
- Sounds that belong to the diegesis (e.g. unseen birds chirping in a forest scene)
- Non-Diegetic:
- Characters can't hear because it exists outside of the world of the film (e.g. musical score)
Visualised Zone:
- The source of the sound is visible on screen
- If characters can hear it, it's diegetic. Example:
- Atmospheric
- Vehicles
- Weapons
- Music inside the film
- Dialogue
- Some form of voice-over
- If it's the character doing the voice-over, it is called Internal Diegetic Sound
- Internal Diegetic Sound:
- Sound coming from the mind of a character we can hear but the other characters cannot
- The primary role is to establish and create the world around the characters.
- Can also have a massive impact on the overall storytelling. Example:
- Sounds heard offscreen can inform the setting
- Expand the world beyond what we can see in the frame
- Can create suspense by having the audience hear it but can't see it
- Breaking the rules of diegetic sound can create shocking moments (e.g. expecting a big explosion but getting only silence)
- Can be manipulated for the audience to hear what the character is hearing
Non-Diegetic Sound:
- Everything the character can't hear is non-diegetic. Example:
- Sound effects
- Musical Score
- Forms of narration
- If the narrator doesn't play a role in the film, it's considered non-diegetic
- The traditional idea of verbal storytelling (e.g. reading the film like a novel)
- There's potential for breaking the illusion with non-diegetic narration (e.g. narration may be breaking the 4th wall) usually done for creative purposes
- Can be used to:
- Enhance motion and movement
- For comedy to put a punchline on a joke
- Musical score (again)
- Plays a massive role in the film experience
- Stories wouldn't be as emotional or as dramatic
Trans-Diegetic Sound:
- Used most to subvert audience expectations
- This is done a lot with music
- Sometimes even gets the audience in on the joke
- Non-diegetic > Diegetic (e.g. The music score playing in the BG suddenly changes to the radio)
- Diegetic > Non-diegetic (e.g. Used in montages)
- Helps to blur the lines between fantasy and reality
Creative Exceptions:
- When sounds don't fit into the different categories stated above
- Used mostly in 4th wall breaks.
_______________
Tutorial
| Fig 4.1, Volume & Stereo Icons |
- Left Icon (Audio Volume): Changes the volume of the selected sound
- Right Icon (Stereo/ Panning): Changes the direction of the selected sound
- It's OK to do this on speakers; on a headset the effect can sound weird
| Fig 4.2, Expanding 'Read' |
- Click on the line to get a keyframe
- Move up and down to change direction [Up: Left, Down: Right]
WEEK 5
_______________
Lecture Recap
There were no lectures for this week so we went straight into the tutorial section.
_______________
Tutorial
Can use a colour code to know which sound is which
- Tracks must not clip; if they do, the audio is not usable
- Use a limiter to compress the audio; set a threshold so it doesn't get too loud
- The level can't go above 0 dB, so use -0.1 dB to still get a loud sound
- Only use effects when needed, not all the time
- Mix > reroute to the fast track; grouping the tracks makes it easier. Remember to adjust the volume of the individual tracks to your liking first
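The -0.1 dB limiter ceiling works because dBFS maps to linear amplitude via amplitude = 10^(dB/20), so -0.1 dB sits just under full scale without clipping. A quick conversion sketch (function name my own):

```python
def dbfs_to_linear(db):
    """Convert a dBFS level to a linear amplitude, where 1.0 = full scale."""
    return 10 ** (db / 20)

print(round(dbfs_to_linear(-0.1), 4))  # 0.9886 -> just below clipping
print(round(dbfs_to_linear(-6.0), 3))  # 0.501  -> roughly half amplitude
```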
WEEK 7
_______________
Lecture Recap
- Reduce specific noise from audio for better quality
- Background sounds are needed
- When cutting sounds, cut at the point where the waveform intersects the middle line (a zero crossing)
- When needing specific audio cues, you can overlay the same sound to create a continuous track & it can sound really smooth when done right.
WEEK 9
_______________
Lecture Recap
Microphones:
- Dynamic:
- Tough
- Durable
- Stage & Live use
- It has a coil that picks up the sound
- Condenser:
- [Only use when the room has good noise reduction]
- Very sensitive
- Very fragile
- Studio Use
- Has 2 very fragile plates => Better not to drop it because it's expensive
- Needs to be powered => Uses +48 volts known as phantom power
- Shotgun/ Directional Microphone:
- Catches the audio where the microphone is pointed
- The character's mouth moving can make it hard to aim, so point it at the chest for the best quality
Patterns:
- Omnidirectional
- Captures everything/ surround sound
- Cardioid
- The most common pattern
- Heart-shaped
- Captures more at the front
- Hypercardioid
- Similar to Figure Eight
- Figure Eight
- Captures evenly at the front and back
- Easier than using two separate microphones for an interview
- Mostly used in duo singing
Microphones with more pickup patterns will be more expensive.
Cable & Connectors:
- XLR jack
- Unbalanced & Balanced
Proximity Effect:
- Recording closer & further may affect the sound quality
- As a sound source moves closer to a directional microphone, an increase in low-frequency response can occur
- Many mics include a bass roll-off switch to counter this effect
- The proximity effect doesn't affect Omni-directional mics
How to create an environmental sound:
- Reduce background noise as much as you can
- Find an area that has a significant amount of noise that is similar (longer stable noise is better)
- Record the room noise before actually recording the audio
How to remove most/ all noises:
- Capture Noise Print > Say yes
- Effects > Noise Reduction/ Restoration > Noise Reduction (process)
- Look for anomalies (lip smack, breathing) => Have to reduce it manually
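Audition's Noise Reduction works in the frequency domain using the captured noise print. As a much rougher illustration of the same idea, here is a time-domain noise gate that uses a noise print only to set its threshold (the function names, frame size and factor are my own, for illustration only):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_gate(samples, noise_print, frame=256, factor=2.0):
    """Mute any frame whose RMS falls below factor x the noise print's RMS."""
    threshold = factor * rms(noise_print)
    out = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        if rms(chunk) < threshold:
            chunk = [0.0] * len(chunk)  # gate the quiet frame to silence
        out.extend(chunk)
    return out
```

Real noise reduction subtracts the noise print's spectrum from each frame instead of muting whole frames, which is why it can clean noise under speech rather than just between words.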
Reduce loud noise:
- Find the wave that intersects with the red line
- Some words have more frequencies
Rack Effect Dynamics:
- Reduce the knob until the noise is gone but the voice doesn't cut off
Compressor: Everything sounds more consistent, though it makes the audio
sound more aggressive
Release: How quickly the compressor stops reducing gain after the level drops
Makeup: How much you want to bring everything up
Rack Effect DeEsser: Adjust the threshold to tame harsh "s" sounds
WEEK 11
_______________
Lecture Recap
Sound for Film (Recap):
- Film has specific scenes, an animatic/storyboard
- The descriptions must be detailed
- May have sections like Music, SFX, and Audio Description for precise scene guidance
Linear:
- We know:
- When
- What kind of sound
- How loud
- How soft
- What we see is what we get/ want to produce
- Cue Stopping
Games:
- Games are NOT linear
- Things happen based on a player's
- States
- Choice
- Action
- We can't always predict what will happen
Event Mapping:
- Determine all possible actions that would require a sound or a change in sound state
- These are often called "Triggers", "Cues" or "Events"
- Categorise the sounds for easier organisation
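The event-mapping idea can be sketched as a simple lookup from trigger names to sound cues (all the event names and file paths below are made up for illustration):

```python
# Hypothetical trigger map: each game event points at the cue it should play.
EVENT_SOUNDS = {
    "player_jump": "sfx/jump.wav",
    "player_land": "sfx/land.wav",
    "door_open":   "sfx/door_creak.wav",
    "low_health":  "music/tense_loop.wav",
}

def trigger(event):
    """Return the sound cue mapped to a game event, or None if unmapped."""
    return EVENT_SOUNDS.get(event)

print(trigger("player_jump"))  # sfx/jump.wav
```

Grouping cues by category (sfx/, music/, etc.) mirrors the "categorise the sounds" advice above and keeps a growing trigger list manageable.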
Can use pitch shift to make it sound higher or lower
Foley:
- Performed live
- The key is to sync with the film shown
- Named after Jack Foley, who pioneered the technique
- Uses different props to get the perfect sound
Voice & Sound Effects
Record more than you need, just in case
