Special Topics: PHP with MySQL (CSC 191)
These 38 pages of class notes were uploaded by Melyssa Aufderhar on Wednesday, October 28, 2015. The notes belong to CSC 191 at Wake Forest University, taught by Jennifer Burg in Fall. Since upload, they have received 50 views.
Excerpt from Chapter 4 of The Science of Digital Media by Jennifer Burg

4.1 Quantization and Quantization Error

4.1.1 Decibels and Dynamic Range

As you have seen in the previous section, sampling rate relates directly to the frequency of a wave. Quantization, on the other hand, relates more closely to the amplitude of a sound wave. Amplitude measures the intensity of the sound and is related to its perceived loudness. It can be measured with a variety of units, including voltages, newtons/m^2, or the unitless measure called decibels.

To understand decibels, it helps to consider first how amplitude can be measured in terms of air pressure. In Chapter 1 we described how a vibrating object pushes molecules closer together, creating changes in air pressure. Since this movement is the basis of sound, it makes sense to measure the loudness of a sound in terms of air pressure changes. Atmospheric pressure is customarily measured in pascals (newtons/meter^2, abbreviated Pa or N/m^2). The average atmospheric pressure at sea level is approximately 10^5 N/m^2. For sound waves, air pressure amplitude is defined as the average deviation from normal background atmospheric air pressure. For example, the threshold of human hearing for a 1000 Hz sound wave varies from the normal background atmospheric air pressure by 2 x 10^-5 N/m^2, so this is its pressure amplitude.

Measuring sound in terms of pressure amplitude is intuitively easy to understand, but in practice decibels are a more common, and in many ways a more convenient, way to measure sound amplitude. Decibels can be used to measure many things in physics, optics, electronics, and signal processing. A decibel is not an absolute unit of measurement; a decibel is always based upon some agreed-upon reference point, and the reference point varies according to the phenomenon being measured. In networks, for example, decibels can be used to measure the attenuation of a signal across the transmission medium. The reference point is the strength of the original signal, and decibels describe how much of the signal is lost relative to its original strength.

For sound, the reference point is the air pressure amplitude of the threshold of hearing. A decibel in the context of sound pressure level is called decibels-sound-pressure-level (dB_SPL). Let E be the pressure amplitude of the sound being measured and E_0 be the pressure amplitude of the threshold of hearing. Then decibels-sound-pressure-level is defined as

    dB_SPL = 20 log10(E / E_0)    (key equation)

Often this is abbreviated simply as dB, but since decibels are always relative, it is helpful to indicate the reference point if the context is not clear. With this use of decibels, E_0, the threshold of hearing, is the point of comparison for the sound being measured.

Given a value for the air pressure amplitude, you can compute the amplitude of sound in decibels with the equation above. For example, what would be the amplitude of the audio threshold of pain, given as 30 N/m^2?

    dB_SPL = 20 log10(30 / 0.00002) = 20 log10(1,500,000) = 20 x 6.17 ~ 123

Thus 30 N/m^2, the threshold of pain, is approximately equal to 123 decibels. (The threshold of pain varies with frequency and with individual perception.)

You can also compute the pressure amplitude given the decibels. For example, what would be the pressure amplitude of normal conversation, given as 60 dB?

    60 = 20 log10(x / 0.00002)
    3 = log10(x / 0.00002)
    10^3 = x / 0.00002
    x = 1000 x 0.00002 ~ 0.02 N/m^2

Thus 60 dB is approximately equal to 0.02 N/m^2.

Decibels can also be used to describe sound intensity, as opposed to sound pressure amplitude. Decibels-sound-intensity-level (dB_SIL) is defined as

    dB_SIL = 10 log10(I / I_0)

where I_0 is the intensity of sound at the threshold of hearing, given as 10^-12 W/m^2 (W is watts). It is sometimes more convenient to work with intensity decibels rather than pressure amplitude decibels, but essentially the two give the same information. The relationship between the two lies in the relationship between pressure (potential, in volts) and intensity (power, in watts). In this respect, I is proportional to the square of E, as discussed in Chapter 1.
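The dB_SPL conversions worked out above are easy to check with a few lines of code. This is a minimal sketch (the function names are mine, not from the text), using the threshold of hearing, 0.00002 N/m^2, as the reference point:

```python
import math

E0 = 0.00002  # threshold of hearing, in N/m^2

def pressure_to_dbspl(e):
    """dB_SPL = 20 log10(E / E0)."""
    return 20 * math.log10(e / E0)

def dbspl_to_pressure(db):
    """Invert the definition: E = E0 * 10^(dB/20)."""
    return E0 * 10 ** (db / 20)

# Threshold of pain: 30 N/m^2 is about 123.5 dB_SPL
assert abs(pressure_to_dbspl(30) - 123.5) < 0.1
# Normal conversation: 60 dB is about 0.02 N/m^2
assert abs(dbspl_to_pressure(60) - 0.02) < 1e-6
```

The two functions are exact inverses of each other, which is a handy sanity check when converting between pressure amplitudes and decibel values.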
Decibels-sound-pressure-level are an appropriate unit for measuring sound because the values increase logarithmically rather than linearly. This is a better match for the way humans perceive sound. For example, a voice at normal conversation level could be 100 times the air pressure amplitude of a soft whisper, but to human perception it seems only about 16 times louder. Decibels are scaled to account for the nonlinear nature of human sound perception. Table 4.1 gives the decibels of some common sounds. (The values in Table 4.1 vary with the frequency of the sound and with individual hearing ability.) Experimentally, it has been determined that if you increase the amplitude of an audio recording by 10 dB, it will sound about twice as loud. Of course, these perceived differences are subjective. For most humans, a 3 dB change in amplitude is the smallest perceptible change.

While an insufficient sampling rate can lead to aliasing, an insufficient bit depth can create distortion, also referred to as quantization noise. In Chapter 1 we showed that signal-to-quantization-noise ratio (SQNR) is defined as SQNR = 20 log10(2^n), where n is the bit depth of a digital file. This can be applied to digital sound and related to the concept of dynamic range. Dynamic range is the ratio between the smallest nonzero value, which is 1, and the largest, which is 2^n. For an n-bit file, this ratio expressed in decibels is

    20 log10(2^n / 1) = 20n log10(2)

Thus the definition is identical to the definition of SQNR, and this is why you see the terms SQNR and dynamic range sometimes used interchangeably. We can simplify 20n log10(2) even further by taking log10(2) ~ 0.3 and multiplying by 20.

(key equation) Let n be the bit depth of a digital audio file. Then the dynamic range of the audio file, d, in decibels, is defined as

    d = 20n log10(2) ~ 6n

As a rule of thumb, you can estimate that an n-bit digital audio file has a dynamic range (or, equivalently, a signal-to-noise ratio) of 6n dB. For example, a 16-bit digital audio file has a dynamic range of about 96 dB.
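The 6n rule of thumb can be verified directly from the exact formula. A short sketch (the function name is my own):

```python
import math

def dynamic_range_db(bit_depth):
    """Exact dynamic range / SQNR in decibels: 20 * n * log10(2)."""
    return 20 * bit_depth * math.log10(2)

# The rule of thumb d ~ 6n slightly underestimates the exact value.
assert round(dynamic_range_db(16)) == 96   # 96.33 dB for 16-bit audio
assert round(dynamic_range_db(8)) == 48    # 48.16 dB for 8-bit audio
```

Since 20 log10(2) is about 6.02 rather than exactly 6, the estimate drifts slowly as bit depth grows, but 6n is accurate enough for everyday use.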
An 8-bit digital audio file has a dynamic range of about 48 dB. Be careful not to interpret this to mean that a 16-bit file allows louder amplitudes than an 8-bit file. Rather, dynamic range gives you a measure of the range of amplitudes that can be captured relative to the loss of fidelity compared to the original sound. Dynamic range is a relative measurement: the relative difference between the loudest and softest parts representable in a digital audio file, as a function of the bit depth.

There is a second way in which the term dynamic range is used. We've defined it as it applies to any file of a given bit depth. The term can also be applied to a particular audio piece, not related to bit depth. In this usage you don't even have to be talking about digital audio. A particular piece of music can be said to have a wide dynamic range if there's a big difference between the loudest and softest parts of the piece. Symphonic classical music typically has a wide dynamic range. Elevator music is produced so that it doesn't have a wide dynamic range and can lie in the background unobtrusively.

Let's return to the term decibels now. You'll find another variation of decibels when you use audio processing programs, where you may have the option of choosing the units for sample values. Units are shown on the vertical axes of the waveforms in Figure 4.10. On the left we've chosen the sample-units view; on the right we've chosen the decibels view. However, the decibels being displayed are decibels-full-scale (dBFS) rather than the decibels-sound-pressure-level defined above. The idea behind dBFS is that it makes sense to use the maximum possible amplitude as a fixed reference point and move down from there. There exists some maximum audio amplitude that can be generated by the system on which the audio processing program is being run. Because this maximum is a function of the system and not of a particular audio file, it is the same for all files and does not vary with bit depth. This
maximum is given the value 0 dBFS. When you look at a waveform with amplitude given in dBFS, the horizontal center of the waveform is -infinity dBFS, and above and below this axis the values progress to the maximum of 0 dBFS. This is shown in the window on the right in Figure 4.10. The bit depth of each audio file determines how much lower you can go below the maximum amplitude before the sample value is reduced to 0. This is the basis for the definition of dBFS, which measures amplitude values relative to the maximum possible value.

(key equation) For n-bit samples, dBFS is defined as follows. Let x be an n-bit audio sample in the range -2^(n-1) <= x <= 2^(n-1) - 1. Then x's value expressed as decibels-full-scale is

    dBFS = 20 log10(|x| / 2^(n-1))

Try the definition of dBFS on a number of values using n = 16. You'll find that a sample value of -32768 maps to 0 (the maximum amplitude possible for the system), 10000 maps to about -10.3, 1 maps to about -90.3, and 0.5 maps to about -96.3. These values are consistent with what you learned about dynamic range: a 16-bit audio file has a dynamic range of about 96 decibels. Any samples that map to decibel values more than 96 decibels below the maximum possible amplitude are effectively lost as silence.

Figure 4.10  Measuring amplitude in samples or decibels, from Audition. Units are samples in the left window, decibels in the right window.

CSC 191E Digital Sound for Music and Theatre, Spring 2008, Burg

Impedance and Output/Input Connections for Digital Audio Recording

Impedance, abbreviated Z, is the measure of the total opposition to current flow in an alternating current circuit. Measured in ohms, impedance is the sum of two components: resistance (R) and reactance (X). Ohms is abbreviated Ω; 1000 ohms is abbreviated kΩ, and 1,000,000 ohms is abbreviated MΩ. The source of an audio signal, like a microphone or electric guitar, is called the source. The place where you plug this source into a sound card, external sound interface, or mixer is called the load or input. The significance of impedance is that it affects what types of microphones and instruments should be plugged into what kinds of sound cards and mixers.
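As an aside, the dBFS sample values quoted in the previous section can be reproduced with a few lines of code. A minimal sketch (the function name is my own), using 16-bit samples so that full scale is 2^15 = 32768:

```python
import math

def dbfs(x, n=16):
    """Decibels-full-scale for an n-bit sample: 20 log10(|x| / 2^(n-1))."""
    return 20 * math.log10(abs(x) / 2 ** (n - 1))

assert dbfs(-32768) == 0.0            # full scale maps to 0 dBFS
assert round(dbfs(10000), 1) == -10.3
assert round(dbfs(1), 1) == -90.3
assert round(dbfs(0.5), 1) == -96.3   # below the ~96 dB range of 16-bit audio
```

Note that halving a sample value always subtracts about 6 dB, which matches the 6-dB-per-bit rule of thumb for dynamic range.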
A microphone of up to about 600 Ω is low impedance; between 600 and 10,000 Ω is medium impedance; and 10,000 Ω or more is high impedance. Most good mics are low-Z, particularly those with XLR connectors. An electric guitar generally is a high-impedance source. (If your guitar requires batteries, then it probably has an active circuit that provides a low-impedance output.) You can check the impedance of your outputs and inputs by reading the device specifications. A high-impedance mic or instrument will generally output a higher-amplitude signal (measured in voltage) than a lower-impedance one. However, don't be fooled into thinking that high impedance is necessarily better than low impedance. In the case of microphones, most good mics are low impedance, like the Shure SM58, an excellent and durable dynamic mic for voice recording.

Figure 1  Shure SM58 microphone (actual size about 6 1/2 inches long)

So how do you know what to plug into what when you want to hook up a microphone and/or an instrument to a sound card, external sound interface, or mixer in order to make a digital recording? To some extent it helps to know the impedance of the output and input. To some extent you can simply be guided by the shapes of the input and output jacks.

Should the impedance of the output and input match? The impedance of output and input don't have to match exactly. In general, the audio output (e.g., the mic) should have lower impedance than the input (the place you plug the mic in). Connecting a lower-impedance output to a higher-impedance input is called bridging. Consider the Shure SM58 mic, which you'll use in the lab. It has a male XLR connector at the end. You plug this into a cable that has female XLR connectors at one end and male at the other. The male end of the cable can be plugged into the XLR connectors of a sound interface like the M-Audio Firewire 410 or M-Audio FastTrack USB we have in the lab. The M-Audio
input has an impedance of about 1500 Ω. The SM58 mic has an impedance of about 150 to 300 Ω. The output is lower impedance than the input by a good amount, but not too much, as you'll see below. It's not good to connect a higher-impedance output to a lower-impedance input, because the frequencies that are recorded may be distorted. For example, you should not connect a high-impedance electric guitar to a low-impedance input jack.

What if you want to connect the Shure SM58, which has XLR connectors, to the internal SoundBlaster sound card of one of the computers in the digital media lab, a sound card which doesn't provide XLR-type input but has 1/4-inch TS-type inputs instead? You can see such an input pictured below; the 1/4-inch input is indicated with a red arrow. TS inputs are generally high-impedance inputs, even higher than XLR inputs. The inputs shown in the picture below are line level, with very high impedance (between 10 kΩ and 1 MΩ).

Figure 2  SoundBlaster X-Fi Platinum sound card

If you were to connect a low-Z mic to a very high-Z input like this line-level input, you would get a weak signal. This is because the high-Z input is designed to receive a relatively high voltage from a high-Z mic, and so the input signal is not amplified much. Well, short of soldering the connection, you wouldn't be able to plug the SM58 microphone into the 1/4-inch jack anyway, because the connections don't fit together. That clues you in right away that you need an adapter. The solution is to put a matching adapter between the mic cable and the input jack.

Figure 3  XLR-to-TS adapter (actual size about 4 1/4 inches long)

The adapter transforms the input from low-Z balanced to high-Z unbalanced. It has a female XLR connector on one end and a TS connection on the other end. You connect the mic cable to the XLR side and plug the TS connection (also called a phone jack) into the sound card.

What else should you know about impedance? Another disadvantage of a high-impedance output is that it isn't good when you have to connect the output to the input by means
of a long cable. With a long cable there's more chance that high frequencies will be lost in the audio signal, because high frequencies are affected more by reactance. Also, a longer cable provides more chance for the high-impedance output to attract electrical interference on its way to the input. This can add noise to the recording in the form of a hum. A low-Z mic can be used with hundreds of feet of cable without picking up hum or losing high frequencies. A medium-Z mic cable is limited to about 40 feet, and a high-Z mic is limited to about 10 feet. The extent to which high frequencies are lost depends also on the capacitance of the cable.

How do you connect an electric guitar to a digital audio recording input? We have three M-Audio FastTrack USB external sound interfaces that have 1/4-inch TRS inputs for guitars; you can plug an electric guitar directly into this input. The one M-Audio Firewire 410 that we have has a preamp at the mic/inst input, and you can plug an electric guitar directly into this input as well.

CSC 191E Digital Sound for Music and Theatre, Spring 2008, Burg

Setting Up Hardware and Software Preferences Before Recording

Making default settings for your computer and its sound card. When you record something on your computer, you have to specify the input. One place to do this is in the Control Panel of your computer, where you specify your default input and output devices. For example, on a ThinkPad R60 under Windows XP you can go to Start > Settings > Control Panel and double-click on Sounds and Audio Devices. You might have a speaker icon on your bottom task bar, too; if so, you can double-click this instead. This opens the Sounds and Audio Devices window, where you see the settings for the default sound card, which on this computer is the SoundMAX HD Audio. If you want to change the default sound
card for input or output, you need to click on the Audio tab. There you see the default devices for sound playback, sound recording, and MIDI music playback; here the defaults are SoundMAX HD Audio for playback and recording, and Microsoft GS Wavetable SW Synth for MIDI. You can see your sound card options when you click the drop-down menu under Default device. My computer lists the SoundMAX HD Audio, the M-Audio Firewire 410 external sound interface, the Digidesign MBox 2, and Bluetooth wireless. This is how you can select whatever default input, output, and MIDI devices you want.

If you then click on the Volume tab, you see the Master Volume controls. This is where you set the volume for different output devices. To choose your input for recording, click Properties, select the mixer device (e.g., SoundMAX HD Audio), and make sure the volume controls you want are checked in the lower box. Then click OK. Now you can see the different types of input; notice whether the microphone is the one chosen.

If I have the M-Audio FW external sound interface connected to my computer, I can make this my default device for input, output, and MIDI, and then the settings windows look a little different. Under Sound recording, the Volume option is now grayed out, meaning that I can't change the volume from there. This is because the M-Audio FW has its own interface for volume settings. I can get to this by going to Start > Settings > Control Panel and then clicking on the M-Audio icon. This opens the M-Audio Firewire control panel: the mixer tab is where you set levels, and the hardware tab shows additional device settings. Consult the User's Manual for details on these settings.

Overriding default settings for the sound card from within a digital audio or MIDI processing program. You can override the default settings when you enter a particular digital audio or MIDI processing program like Audition, Music Creator, Reason, etc. In Audacity, go to Edit > Preferences; the Audio I/O tab is where you can set your input and output devices (and the number of channels, e.g., mono) for the Audacity session. In Adobe Audition, you go to Edit > Audio Hardware Setup. Audition gives you two different ways to look at your sound, the Edit View and the Multitrack View, and you set the default input and output for each view. If you click the Multitrack View tab, you can set the audio driver for that view. Once you've set the audio driver for the view, you can select which input and output ports you want for each track. Different tracks can have different input and output ports.

CSC 191E Digital Sound for Music and Theatre, Spring 2008, Burg

Sound Cards and External Sound Interfaces

Your computer is equipped with a sound card. The sound card converts the analog signal from a microphone to a digitally-recorded sound file. The process that does this is called digitization, or analog-to-digital conversion (ADC). A software program provides the interface between you and the sound card. For example, you can use Audacity on your laptop or Audition on the computers in the Digital Media Lab to create a recording. Sound Forge, Logic, Acid Pro, Reason, Cakewalk Music Creator, and Pro Tools are other possibilities.

The sound card on the ThinkPad R60 is a SoundMAX Integrated Digital HD Audio. You access it by means of a 1/8-inch microphone or speaker jack on the side of the computer. The input jack (mic) is red, and the output jack (speakers or headphones) is green. The ThinkPad R60 also has an internal mic and speakers, so you don't really have to hook up anything externally. You can just talk into the computer when you record, but you won't get good quality because you'll pick up all the background noise.

If the sound card that comes standard with your computer isn't a very good one, you can buy and install a better one internally, or buy an additional external sound interface that contains an ADC and extra input/output jacks. We have three M-Audio FastTrack USB external sound interfaces. You can borrow one of these, along with a microphone, and take it to wherever you want to do your recording. You can attach both a microphone and an electric guitar to this sound interface. The User's Manual is available from the M-Audio website.

Figure 2  M-Audio FastTrack USB, front view
Figure 3  M-Audio FastTrack USB, rear view

We also have one M-Audio Firewire 410 sound interface.

Figure 4  M-Audio Firewire 410
Figure 6  M-Audio Firewire 410, back view

You can see from the pictures above that this sound interface has input and output ports of different types. In the front view you can see two XLR-type inputs on the left. You can connect a Shure SM58 microphone here. The User's Manual for the M-Audio Firewire 410 is also available from the M-Audio website.

DO NOT HOT BOOT THIS DEVICE. Turn off your computer before connecting the M-Audio sound interface. Then make the connections as shown below and turn on the computer. Note that if you connect the M-Audio with a 6-pin to 6-pin Firewire connection, the Firewire supplies sufficient power to the M-Audio FW and you don't need external power. However, if you connect the M-Audio FW 410 to a laptop that has only a 4-pin Firewire connection, you'll need the external power supply. Also note that both the Mic/Line and Pad buttons should be in the out position for the XLR input that you're using.

Figure 7  Setting up the M-Audio FW 410 (computer, microphone, instrument, headphones, monitors)

Connecting to the SoundBlaster X-Fi Platinum internal sound card. One of the computers in the Digital Media Lab has a SoundBlaster X-Fi Platinum internal sound card. This is not the standard sound card that came with the computer, but an extra one that we bought. You can connect the Shure SM58 microphone to this sound card by connecting the mic cable to the adapter. Then plug the adapter into the place indicated with the red arrow in Figure 9.

Figure 8  XLR-to-TS adapter (actual size about 4 1/4 inches long)
Figure 9  SoundBlaster X-Fi Platinum sound card

CSC 191E Digital Sound Production for Music and Theatre, Spring 2008, Burg

Digital Audio and MIDI Compared

There are two basic ways in which sound is stored in a computer: as digital audio and as MIDI.

Digital Audio

Sound is
produced by vibrations that cause air pressure to change over time. Mathematically, it can be represented as a function over time and graphed as a waveform.

Figure: The word "boo" recorded in Audacity.

The amplitude of the wave corresponds to the loudness of the sound; it is customarily measured in decibels. The frequency of the wave corresponds to the pitch of the sound. Frequency is measured in Hertz. One Hertz is one cycle/second. One kilohertz (abbreviated kHz) is 1000 cycles per second. One megahertz (abbreviated MHz) is 1,000,000 cycles/second.

Figure: Sound wave in blue, two cycles shown, the first cycle ending at the red vertical line.

Sound in digital audio format is stored as a sequence of numbers representing the amplitude of air pressure as it varies over time. Sound is converted to digital audio format by sampling and quantization. When you attach a microphone to your sound card and create a digital recording using a program like Audacity or Music Creator as the interface, the analog-to-digital converter (ADC) in your sound card is doing this process of sampling and quantization. The mic detects the changing air pressure amplitude, communicating this information to the ADC at evenly spaced points in time. This is the sampling process. The sound card quantizes these values and sends them to the computer to be stored. When the sound file is played, the reverse process happens. The sound card has a digital-to-analog converter (DAC) that converts the digital samples back to continuously varying air pressure, a form that can be converted to vibrations in the air, that is, sound.

When you make a digital recording, the sampling rate must be at least twice the frequency of the highest-frequency component in the sound. Otherwise the recording you make won't have the true frequencies in it, so it won't sound exactly like what you were trying to record. This is called aliasing.
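Aliasing can be demonstrated numerically. The sketch below (function name is mine, not from the text) samples a 30,000 Hz sine wave at 44,100 samples/second, which is above the Nyquist limit of 22,050 Hz, and shows that the resulting samples are indistinguishable from those of a 14,100 Hz wave (apart from a sign flip, since the alias "folds" around half the sampling rate):

```python
import math

def sample_sine(freq_hz, sr_hz, n):
    """Return n samples of a sine wave at the given frequency and sampling rate."""
    return [math.sin(2 * math.pi * freq_hz * i / sr_hz) for i in range(n)]

sr = 44100                       # CD-quality sampling rate
hi = sample_sine(30000, sr, 64)  # above the Nyquist limit (sr/2 = 22050 Hz)
lo = sample_sine(14100, sr, 64)  # the alias: 44100 - 30000 = 14100 Hz

# Sample for sample, the 30 kHz tone is the mirror image of a 14.1 kHz tone,
# so a recording made this way would contain the wrong frequency.
assert all(abs(a + b) < 1e-9 for a, b in zip(hi, lo))
```

This is exactly why the sampling rate must exceed twice the highest frequency present: once the samples are taken, there is no way to tell the true tone and its alias apart.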
Since the highest frequency that humans can hear is about 20,000 Hz, the CD-quality sampling rate is set at 44,100 samples/second. Samples/second is also abbreviated as Hertz, so CD-quality digital audio is sampled at 44.1 kHz.

When samples of the air pressure amplitude of a sound are taken, they are stored in a computer as binary numbers. Binary numbers, base 2 that is, consist of bits, each of which can have a value of 0 or 1. Each number in a computer is contained in a certain number of bits. This is called the bit depth. The bit depth per audio sample puts a limit on the number of values that can be represented. If two bits are used, then four values can be represented: 00, 01, 10, and 11. If three bits are used, eight values can be represented: 000 through 111. In general, if b bits are used, 2^b values can be represented. Each value is used to represent one air pressure amplitude level between a minimum and a maximum. Thus, the larger the bit depth, the more precisely you can represent air pressure amplitude. If you have more bits, you don't have to round values up or down so much to the nearest allowable value.

The bit depth for CD-quality digital audio is 16 bits per channel, with two stereo channels for each sample. A byte is equal to eight bits, so that's four bytes per sample for a stereo recording and two bytes per sample for a mono recording. When you record digital audio, you'll need to choose the sampling rate and bit depth. CD quality is probably fine for your purposes, so a sampling rate of 44.1 kHz and a bit depth of 16 bits per sample is good. You don't need to record in stereo; you can create stereo channels later in the editing if you want to. Recording in mono is fine.

A digital audio recording captures exactly the sound that is being transmitted at the moment the sound is made. With a high enough sampling rate and bit depth, the resulting recording can have great fidelity to the original. For example, when a singer is being recorded, all the nuances of the performance are captured: the breathing, characteristic resonance of the voice,
stylistic performance of the song, subtle shifts in timing, and so forth. This is one of the advantages of digital audio over MIDI.

A disadvantage of digital audio is that it results in a large file. You can easily do the arithmetic. If you have 44,100 samples/second, four bytes per sample, and 60 seconds in each minute, how many bytes of data do you get for a one-minute stereo digital recording?

    44,100 samples/s x 4 bytes/sample x 60 s/min = 10,584,000 bytes/min

That's about 10 megabytes per minute. Because digital audio results in such big files, it is usually compressed before distribution. Often you keep it in uncompressed form while you are working on it and then compress it at the end to a format like MP3. If you want to import an uncompressed audio file into your project, you can use the WAV format.

In summary, digital audio is stored as a sequence of numbers, each representing the air pressure amplitude of a sound wave at a certain moment in time. In CD-quality audio, each number is stored in two bytes, with two parallel sequences of numbers, one for each of the two stereo channels. There are 44,100 of these numbers per second for each of the two stereo channels. This creates a big file, which is why digital audio is compressed for distribution.

MIDI

MIDI stands for Musical Instrument Digital Interface. It is another way in which sound can be stored and communicated in a computer. The recording, encoding, and playing of MIDI is done by the interaction of three basic components:

- a MIDI input device, often an electronic keyboard or other MIDI-enabled instrument, which you attach to a computer;
- a MIDI sequencer, often a piece of software like Cakewalk Music Creator, Logic, or Pro Tools, that receives and records the messages sent by the MIDI input device;
- and a MIDI synthesizer or sampler, e.g., the sound card of your computer or a software synthesizer bundled with a MIDI sequencer, that knows how to interpret the MIDI messages and convert them into sound waves that can be played.

A MIDI file does not consist of audio samples.
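The file-size arithmetic above is easy to parameterize, which makes it simple to compare recording settings. A minimal sketch (the function name is my own):

```python
def audio_bytes_per_minute(sr=44100, bits_per_sample=16, channels=2):
    """Uncompressed data rate: samples/s * bytes/sample * channels * 60 s/min."""
    bytes_per_sample = bits_per_sample // 8
    return sr * bytes_per_sample * channels * 60

# CD-quality stereo: 44,100 x 2 x 2 x 60 = 10,584,000 bytes per minute
assert audio_bytes_per_minute() == 10_584_000
# Recording in mono, as suggested above, halves the file size.
assert audio_bytes_per_minute(channels=1) == 5_292_000
```

Plugging in other values shows why the defaults matter: dropping to 8 bits or a lower sampling rate shrinks the file, but at the cost of dynamic range and frequency response.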
MIDI uses a different method of encoding sound and music. When you play a note, say middle C, on a musical keyboard attached to a computer that is running a MIDI sequencer, the keyboard sends a message that says, essentially, "Note On, C, velocity v." When you lift your finger, a "Note Off" message is then sent. Your MIDI keyboard doesn't even have to make a sound; it's just engineered to send a message to the receiving device, which in this case is your computer and the sequencer on it (e.g., Music Creator). This is different from hooking a microphone up to the computer, holding the mic close to a music keyboard, and recording yourself playing the note C as digital audio. In that case, your keyboard must make a sound in order for anything to be recorded. When you record digital audio, the microphone and ADC in your sound card are sampling and quantizing the changing air pressure amplitude caused by your striking the note C. A sequence of samples is stored, that sequence lasting however long you make the recording. If it's one second, it will be 44,100 samples stored as 176,400 bytes. In comparison, the MIDI message for playing the note C and then releasing it requires only about four bytes.

There are also messages that say what type of instrument you want to hear when the file is played back. When the MIDI is played back, it doesn't have to sound like a piano or an electronic keyboard. You can say in the file that you want to hear a clarinet, flute, or any other of the 128 standard MIDI instruments. There can be even more, but 128 instruments are always available in standard MIDI. Each instrument is called a patch. Indicating in a MIDI file that you want a different instrument at some point is called a patch change. A whole set of 128 instruments is called a bank.

When digital audio is played, the changing air pressure amplitudes that were recorded are reproduced by the playing device so that we hear the sound. MIDI is different because no air pressure amplitudes are recorded. In the case of MIDI, a synthesizer must know how to create a note C that sounds like the instrument you specified in the file.
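The "Note On" and "Note Off" messages described above have a simple byte layout in the MIDI wire protocol: a status byte (message type plus channel number) followed by two data bytes (note number and velocity). A sketch, assuming channels numbered from 0 and middle C as note number 60 (the function names are mine):

```python
def note_on(note, velocity, channel=0):
    """Build a 3-byte MIDI Note On message: status 0x90 | channel, note, velocity."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """Build a 3-byte MIDI Note Off message: status 0x80 | channel, note, velocity 0."""
    return bytes([0x80 | channel, note & 0x7F, 0])

MIDDLE_C = 60
msg = note_on(MIDDLE_C, 100) + note_off(MIDDLE_C)
assert msg == bytes([0x90, 60, 100, 0x80, 60, 0])
# Six bytes for the entire note, versus 176,400 bytes for one second
# of CD-quality stereo digital audio of the same note.
```

MIDI's running-status shorthand can pack the pair into even fewer bytes, which is roughly the basis for the "about four bytes" figure cited above; either way, the contrast with digital audio is enormous.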
MIDI, a synthesizer must know how to create a note C that sounds like the instrument you specified in the file. The synthesizer could be a hardware or software device. There are a number of places where MIDI sounds can be synthesized:
• the sound card of your computer;
• a software synthesizer provided by the operating system of your computer, like Microsoft GS Wavetable SW Synth;
• a software synthesizer such as the Cakewalk TTS-1 provided by Music Creator, or the many synthesizers provided by Reason.

The synthesizer converts the MIDI messages into a waveform representation of sound and sends it to the sound card. When the music is played, the DAC in the sound card converts the digitally-encoded waveforms into continuously varying air pressure amplitudes (vibrations) that cause the sound to be played. A MIDI synthesizer can convert MIDI messages into encoded sound waves in one of two basic ways. The sound waves can be created by mathematical synthesis, using operations that combine sine waves in ways that have been determined to create the desired pitches and timbres of instruments. Alternatively, rather than creating the waves by mathematics, a synthesizer can simply look up sound clips that have been stored in its memory bank. Some people make a distinction between these two methods: they call something a synthesizer if it uses mathematical synthesis and a sampler if it reads from a memory bank of sound samples. Some people use the term synthesizer in either case. In this course we'll use the terms interchangeably. For a more detailed discussion of the different types of MIDI synthesis, see http://en.wikipedia.org/wiki/S… . Because MIDI sounds are synthesized or read from stored samples, the quality of the sound that you end up with is dependent on the quality of your synthesizer. The sound card that comes with your computer may or may not give you good MIDI sound. Different software synthesizers offer a different number of sounds with different qualities. This is why it's good to
experiment with different synthesizers if you have them available, like the ones in Music Creator and Reason. Another disadvantage of MIDI-created music is that it lacks the individuality of digitally recorded individual performances. However, you can compensate for this lack of individuality somewhat by altering the MIDI sounds with filters, pitch bends, and so forth. An advantage of MIDI is the ease with which you can make changes and corrections. One quick patch change (a click of the mouse) can change your sound from a piano to a violin. The key or tempo in which a piece is played can be changed just as easily. These changes are nondestructive, in that you can set things back to their original state any time you want to. If you play music on a keyboard and record it as MIDI, and you make a few mistakes, you can go in and change those notes individually, which is impossible in digital audio because the notes aren't stored as separate entities.

Mixing Digital Audio and MIDI

You can combine digital audio and MIDI in a music production if you have a multitrack editor that handles both, e.g., Music Creator, Logic, or Pro Tools. Each track is designated as either a MIDI track or an audio track. You can designate different inputs and outputs for each track. You can combine audio and MIDI tracks in a wide variety of ways. For example, you can record drums on one track, a violin on another, and piano on another, all in MIDI. Then you might want to record one voice on one audio track and another voice on a second audio track. All these recordings can be done at the same time or at different times, depending on the power of your computer processor to handle simultaneous multitrack recording. You can record one voice on one track and then play that track while you record a second voice in harmony. The advantage of having different instruments and voices on different tracks is that you can then edit them separately, without one interfering with another. The adjustments that you want to make to one voice or
instrument may not be the same as the adjustments you want to make for another. You can also separate tracks into different output channels, a way to create stereo separation. When you're done with all your editing, you mix everything down to digital audio and compress it for distribution. However, you should keep a copy of the multitrack file in the file format of your sequencer (e.g., .cwp for Music Creator). That way you can edit it more later if you want to.

Excerpt from Chapter 5 of The Science of Digital Media by Jennifer Burg

5.2 Dynamics Processing

Dynamics processing is the process of adjusting the dynamic range of an audio selection, either to reduce or to increase the difference between the loudest and softest passages. An increase in amplitude is called gain or boost. A decrease in amplitude is called attenuation or, informally, a cut. Dynamics processing can be done at different stages while audio is being prepared, and by a variety of methods. The maximum amplitude can be limited with a hardware device during initial recording; gain can be adjusted manually in real-time with analog dials; and hardware compressors and expanders can be applied after recording. In music production, vocals and instruments can be recorded at different times, each on its own track, and each track can be adjusted dynamically in real-time or after recording. When the tracks are mixed down to a single track, the dynamics of the mix can be adjusted again. Finally, in the mastering process, dynamic range can be adjusted for the purpose of including multiple tracks on a CD and giving the tracks a consistent sound. In summary, audio can be manipulated through hardware or software; the hardware can be analog or digital; the audio can be processed in segments or holistically; and processing can happen in real-time or after recording. The information in this section is based on digital dynamics processing tools: hard limiting, normalization, compression, and expansion. These tools alter the amplitude of an audio signal and
therefore change its dynamics, the difference between the softest and the loudest part of the signal. Limiting sets a maximum amplitude. Normalization finds the maximum-amplitude sample in the signal, boosts it to the maximum possible amplitude (or an amplitude chosen by the user), and boosts all other amplitudes proportionately. Dynamic compression decreases the dynamic range of a selection. (This type of compression has nothing to do with file size.) Dynamic expansion increases it. The purpose of adjusting dynamic range is to improve the texture or balance of sound. The texture of music arises in part from its differing amplitude levels.

Aside: You should be careful to distinguish among the following: the possible dynamic range as a function of bit depth in digital audio; the actual dynamic range of a particular piece of audio; and the perceived loudness of a piece. The possible dynamic range for a piece of digital audio is determined by the bit depth in which that piece is encoded. In Chapters 1 and 4 we derived a formula that tells us that the possible dynamic range is equal to approximately 6n dB, where n is the number of bits per sample. For CD-quality audio, which uses 16-bit samples, this would be 96 dB. However, a given piece of music doesn't necessarily use that full possible dynamic range. The dynamic range of a piece is the difference between its highest-amplitude and lowest-amplitude sample. The overall perceived loudness of a piece, which is a subjective measurement, is related to the average RMS of the piece. The higher the average RMS, the louder a piece seems to the human ear. RMS (root-mean-square) is explained in Chapter 4.

Instruments and voices have their characteristic amplitude or dynamic range. The difference between peak-level amplitude and average amplitude of the human voice, for example, is about 10 dB. In a musical composition, instruments and voices can vary in amplitude over time: a flute is played softly in the background, vocals emerge at medium amplitude, and a drum is suddenly struck
at high amplitude. Classical music typically has a wide dynamic range. Sections of low amplitude are contrasted with impressive high-amplitude sections full of instruments and percussion. You probably are familiar with Beethoven's Fifth Symphony. Think of the contrast between the first eight notes and what follows: BUM-BUM-BUM-BAH, BUM-BUM-BUM-BAH. Then softer. In contrast, elevator music, or Muzak, is intentionally produced with a small dynamic range. Its purpose is to lie in the background, pleasantly but almost imperceptibly. Musicians and music editors have words to describe the character of different pieces that arises from their variance in dynamic range. A piece can sound punchy, wimpy, smooth, bouncy, hot, or crunchy, for example. Audio engineers train their ears to hear subtle nuances in sound and to use their dynamics processing tools to create the effects they want. Deciding when and how much to compress or expand dynamic range is as much art as science. Compressing the dynamic range is desirable for some types of sound and listening environments and not for others. It's generally a good thing to compress the dynamic range of music intended for radio. You can understand why if you think about the way radio sounds in a car, which is where radio music is often heard. With the background noise of your tires humming on the highway, you don't want music that has big differences between the loudest and softest parts. Otherwise, the soft parts will be drowned out by the background noise. For this reason, radio music is dynamically compressed, and then the amplitude is raised overall. The result is that the sound has a higher average RMS, and overall it is perceived to be louder. There's a price to be paid for dynamic compression. Some sounds, like percussion instruments or the beginning notes of vocal music, have a fast attack time. The attack time of a sound is the time it takes for the sound to change amplitude. With a fast attack time, the sound reaches high amplitude in a sudden burst, and then it may drop
off quickly. Fast-attack percussion sounds like drums or cymbals are called transients. Increasing the perceived loudness of a piece by compressing the dynamic range and then increasing the overall amplitude can leave little headroom (room for transients to stand out with higher amplitude). The entire piece of music may sound louder, but it can lose much of its texture and musicality. Transients give brightness or punchiness to sound, and suppressing them too much can make music sound dull and flat. Allowing the transients to be sufficiently loud, without compromising the overall perceived loudness and dynamic range of a piece, is one of the challenges of dynamics processing. While dynamic compression is more common than expansion, expansion has its uses also. Expansion allows more of the potential dynamic range (the range made possible by the bit depth of the audio file) to be used. This can brighten a music selection. Using downward expansion, it's possible to lower the amplitude of signals below the point where they can be heard. The point below which a digital audio signal is no longer audible is called the noise floor. Say that your audio processing software represents amplitude in dBFS (decibels full scale), where the maximum amplitude of a sample is 0 and the minimum possible amplitude (a function of bit depth) is somewhere between 0 and −∞. For 16-bit audio, the minimum possible amplitude is approximately −96 dBFS. Ideally this is the noise floor, but in most recording situations there is a certain amount of low-amplitude background noise that masks low-amplitude sounds. The maximum amplitude of the background noise is the actual noise floor. If you apply downward expansion to an audio selection and you lower some of your audio below the noise floor, you've effectively lost it. On the other hand, you could get rid of the background noise itself by downward expansion, moving the background below the −96 dB noise floor. To understand how dynamics processing works, let's look more closely at the tools and the
mathematics underlying dynamics processing, including hard limiting, normalization, compression, and expansion. We've talked mostly about dynamic range compression in the examples above, but there are four ways to change dynamic range: downward compression, upward compression, downward expansion, and upward expansion, as illustrated in Figure 5.3. The two most commonly applied processes are downward compression and downward expansion. You have to look at your hardware or software tool to see what types of dynamics processing it can do. Some tools allow you to use these four types of compression and expansion in various combinations with each other.
• Downward compression lowers the amplitude of signals that are above a designated level without changing the amplitude of signals below the designated level. It reduces the dynamic range.
• Upward compression raises the amplitude of signals that are below a designated level without altering the amplitude of signals above the designated level. It reduces the dynamic range.
• Upward expansion raises the amplitude of signals that are above a designated level without changing the amplitude of signals below that level. It increases the dynamic range.
• Downward expansion lowers the amplitude of signals that are below a designated level without changing the amplitude of signals above this level. It increases the dynamic range.

Figure 5.3 Types of dynamic range compression and expansion

Audio limiting, as the name implies, limits the amplitude of an audio signal to a designated level. Imagine how this might be done in real-time during recording. If hard limiting is applied, the recording system does not allow sound to be recorded above a given amplitude; samples above the limit are clipped. Clipping cuts amplitudes of samples to a given maximum and/or minimum level. If soft limiting is applied, then audio
signals above the designated amplitude are recorded at lower amplitude. Both hard and soft limiting cause some distortion of the waveform. Normalization is a process which raises the amplitude of audio signal values, and thus the perceived loudness of an audio selection. Because normalization operates on an entire audio signal, it has to be applied after the audio has been recorded. The normalization algorithm proceeds as follows:
• find the highest-amplitude sample in the audio selection;
• determine the gain needed in the amplitude to raise the highest amplitude to maximum amplitude (0 dBFS by default, or some limit set by the user);
• raise all samples in the selection by this amount.

A variation of this algorithm is to normalize the RMS amplitude to a decibel level specified by the user. RMS can give a better measure of the perceived loudness of the audio. In digital audio processing software, predefined settings are sometimes offered with descriptions that are intuitively understandable, for example, "Normalize RMS to −10 dB (speech)." Often normalization is used to increase the perceived loudness of a piece after the dynamic range of the piece has been compressed, as described above in the processing of radio music. Normalization can also be applied to a group of audio selections. For example, the different tracks on a CD can be normalized so that they are at basically the same amplitude level. This is part of the mastering process. Compression and expansion can be represented mathematically by means of a transfer function, and graphically by means of the corresponding transfer curve. Digital audio processing programs sometimes give you this graphical view, with which you can specify the type of compression or expansion you wish to apply. Alternatively, you may be able to type in values that indicate the compression or expansion ratio. The transfer function maps an input amplitude level to the amplitude level that results from compression or expansion. If you apply no compression or expansion to an audio
file, the transfer function graphs as a straight line at a 45° angle, as shown in Figure 5.4. If you choose to raise the amplitude of the entire audio piece by a constant amount, this can also be represented by a straight line of slope 1, but the line crosses the vertical axis at the decibel amount by which all samples are raised. For example, the two transfer functions in Figure 5.4 show a 5 dB increase and a 5 dB decrease in the amplitude of the entire audio piece.

Figure 5.4 Linear transfer functions for 5 dB gain and 5 dB loss (no compression or expansion)

To apply downward compression, you designate a threshold, that is, an amplitude above which you want the amplitude of the audio signal to be lowered. For upward compression, amplitudes below the threshold would be raised. Figure 5.5 shows the transfer function graph corresponding to downward compression, where the rate of change of sample values higher than −40 dB is lowered by a 2:1 ratio. Figure 5.6 shows the traditional view. Compression above the threshold is typically represented as a ratio, a:b. If you indicate that you want a compression ratio of a:b, then you're saying that, above the threshold, for each a decibels that the signal increases in amplitude, you want it to increase only by b decibels. For example, if you specify a dynamic range compression ratio of 2:1 above the threshold, then if the amplitude rises by 1 dB from one sample to the next, it will actually go up after compression by only 0.5 dB. Notice that, beginning at an input of −40 dB and continuing to the end, the slope of the line is b/a = 1/2.

Figure 5.6 Downward compression, traditional view (from Audition)

Often a gain makeup is applied after downward compression. You can see in Figure 5.6 that there is a place to set Output Gain. The Output Gain is set to 0 in the figure. If you set the output gain
to a value g dB greater than 0, this means that after the audio selection is compressed, the amplitudes of all samples are increased by g dB. Gain makeup can also be done by means of normalization, as described above. The result is to increase the perceived loudness of the entire piece. However, if the dynamic range has been decreased, the perceived difference between the loud and soft parts is reduced.

Figure 5.7 Upward compression by 2:1 below −30 dB (from Audition)

Upward compression is accomplished by indicating that you want compression of sample values that are below a certain threshold or decibel limit. For example, Figure 5.7 shows how you indicate that you want samples that are below −30 dB to be compressed by a ratio of 2:1. If you look at the graph, you can see that this means that sample values will get larger. For example, a sample value of −80 dB becomes −54 dB after compression. This may seem counterintuitive at first, since you may think of compressing something as making it smaller. But remember that it is the dynamic range, not the sample values themselves, that you are compressing. If you want to compress the dynamic range by changing values that are below a certain threshold, then you have to make them larger, moving them toward the higher-amplitude values at the top. This is what is meant by upward compression. With some tools it's possible to achieve both downward and upward compression with one operation. Figure 5.8 shows the graph for downward compression above −20 dB, no compression between −20 and −60 dB, and upward compression below −60 dB. To this, an output gain of 4 dB is added. An audio file before and after such dynamics processing is shown in Figure 5.9. The dynamic range has been reduced by both downward and upward compression. Sometimes normalization is used after dynamic range compression. If we downward and upward compress the same audio file and follow this with normalization, we get the audio file pictured in
Figure 5.10.

Figure 5.9 Audio file before and after dynamics processing

Figure 5.10 Downward and upward compression followed by normalization

It is also possible to compress the dynamic range at both ends, by making high amplitudes lower and low amplitudes higher. Following is an example of compressing the dynamic range by squashing at both the low and high amplitudes. The compression is performed on an audio file that has three single-frequency tones at 440 Hz. The amplitude of the first is −5 dB, the second is −3 dB, and the third is −12 dB. Values above −4 dB are made smaller (downward compression). Values below −10 dB are made larger (upward compression). The settings are given in Figure 5.12. The audio file before and after compression is shown in Figure 5.11a and Figure 5.11b. The three sine waves appear as solid blocks because the view is too far out to show detail; you can see only the amplitudes of the three waves.

Figure 5.11 Audio file: three consecutive sine waves of different amplitudes, (a) before and (b) after dynamic compression

Figure 5.12 Compression of dynamic range at both high and low amplitudes (from Audition)

There's one more part of the compression and expansion process to consider. The attack of a dynamics processor is defined as the time between the appearance of the first sample value beyond the threshold and the full change in amplitude of samples beyond the threshold. Thus attack relates to how quickly the dynamics processor initiates the compression or expansion. When you downward compress the dynamic range, a slower attack time can sometimes sound more gradual and natural, preserving transients by not immediately lowering amplitude
when the amplitude suddenly, but perhaps only momentarily, goes above the threshold. The release is the time between the appearance of the first sample that is not beyond the threshold (before processing) and the cessation of compression or expansion.
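The threshold-and-ratio arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration, not production code: it applies only the static transfer function to a list of decibel levels (attack and release, which unfold over time, are ignored), and the function name and parameter names are our own, chosen to mirror the threshold, ratio, and output-gain controls discussed above.

```python
def downward_compress(levels_db, threshold_db=-40.0, ratio=2.0, makeup_db=0.0):
    """Static transfer function for downward compression.

    Levels above the threshold rise only 1 dB at the output for every
    `ratio` dB at the input (slope b/a = 1/2 for a 2:1 ratio); levels at
    or below the threshold pass through unchanged.  Output (makeup) gain
    is then added to everything.
    """
    out = []
    for db in levels_db:
        if db > threshold_db:
            db = threshold_db + (db - threshold_db) / ratio
        out.append(db + makeup_db)
    return out

# 2:1 compression above -40 dB: an input at -20 dB (20 dB over the
# threshold) comes out at -30 dB (only 10 dB over the threshold).
print(downward_compress([-60.0, -40.0, -20.0]))  # [-60.0, -40.0, -30.0]
```

Upward compression works the same way with the inequality reversed: values below the threshold are pulled up toward it.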
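The three-step peak-normalization algorithm can be sketched the same way. This is a hypothetical helper, assuming samples are floating-point values in the range −1.0 to 1.0; real tools work on integer sample words, but the gain arithmetic is the same.

```python
import math

def normalize(samples, target_dbfs=0.0):
    """Peak normalization: find the highest-amplitude sample, compute
    the gain that brings it to the target level (0 dBFS by default),
    and apply that same gain to every sample."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)           # silence: nothing to normalize
    target = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    gain = target / peak
    return [s * gain for s in samples]

# A quiet one-second 440 Hz tone with peak amplitude about 0.25...
tone = [0.25 * math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
louder = normalize(tone)  # ...now peaks at 1.0 (0 dBFS)
```

Because every sample is scaled by the same gain, the dynamic range of the selection is unchanged; only the overall level (and hence the perceived loudness) moves.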
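Finally, the storage arithmetic from the digital-audio discussion at the start of this excerpt can be checked directly. The constant names below are ours; the figures (44,100 samples per second, 2 bytes per sample, 2 stereo channels, 4-byte MIDI note messages) come from the text.

```python
# Worked example: storage cost of CD-quality stereo digital audio.
SAMPLE_RATE = 44100       # samples per second, per channel
BYTES_PER_SAMPLE = 2      # 16-bit samples
CHANNELS = 2              # stereo
SECONDS_PER_MINUTE = 60

bytes_per_minute = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * SECONDS_PER_MINUTE
print(bytes_per_minute)   # 10584000 -- about 10 megabytes per minute

# One second of stereo audio versus a four-byte MIDI Note On / Note Off pair:
audio_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS  # 176400 bytes
midi_note = 4                                             # bytes
print(audio_second // midi_note)  # 44100: the audio is 44,100 times larger
```

This factor is the quantitative side of the digital-audio/MIDI trade-off: MIDI files are tiny because they store messages about notes, not the air-pressure samples themselves.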