Monday, June 20, 2011

Chapter 5 Digitalization: Section 5.1.1, 5.1.2 & 5.3.1

After a week of struggling with music theory, here at last is the material I couldn't wait to read! Skimming through this chapter reminded me of a computer science class I took in high school, where I learned about memory and different file types, and of a math contest problem through which I got to know decibels as well as comb filtering. This chapter covers many concepts with comparatively complicated meanings, and it took me a lot of time to understand them, but compared with the music notation in Chapter 3, these make much more sense to me.

5.1.1 Analog vs. Digital describes the process of analog-to-digital and digital-to-analog conversion among and within devices. However, it might be easier to read if the sentences were divided more clearly, say, one sentence per device. 5.1.2 Digitalization is fairly easy to follow in general. All the concepts are pretty straightforward except 5.1.2.4 and Noise Shaping in 5.1.2.2.5, but I guess those will become clearer after I read the algorithms in Section 3. The material was getting dry by the time I reached the beginning of 5.1.3 Digital Audio Formats and Transmission, where it starts talking about hardware, so I switched to 5.3, the math and science part. I want to mention that for readers who are interested in the algorithms behind it all and are able to read the material in Section 3, reading Section 2 followed by the corresponding algorithm in Section 3 is a very good way to learn how things actually work.

After reading 5.1.2.3 and 5.1.2.4, I am still confused about how the sample points are chosen and how the sound waves are quantized. The ideas make sense, but they take effort to think through. I hope a complete example or demonstration combining the quantization levels and the selection of sample points will be given. From my understanding, one sample is taken after each fixed time slice, and its value is rounded down to the nearest quantization level. The higher the bit depth, the more quantization levels there are, and thus the closer the shape the sample points make up is to that of the original sound wave, and the smaller the error wave is… Also, I don't quite understand the concept of SQNR. Does it just mean that the bigger the n we choose, the bigger the SQNR will be, and thus the quieter the noise will be compared with the sound?
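To check my own understanding, I tried sketching the process in Python. This is just my reading of the text, not the book's actual algorithm: one sample per fixed time slice, each value rounded down to the nearest quantization level, and the error wave as the difference between the two. The 6 dB-per-bit SQNR rule of thumb at the end is a standard approximation, which would answer my question above: bigger n, bigger SQNR, quieter noise.

```python
import math

def quantize(sample, bit_depth):
    """Round a sample in [-1.0, 1.0] DOWN to the nearest quantization level,
    as in the tutorial's rounding-down examples (clamped to the level range)."""
    levels = 2 ** bit_depth                 # number of quantization levels
    step = 2.0 / levels                     # vertical distance between levels
    level = math.floor(sample / step)       # round down, not to nearest
    level = max(-levels // 2, min(levels // 2 - 1, level))  # keep in range
    return level * step

# One sample per fixed time slice: a 1000 Hz sine sampled at 8000 Hz.
sample_rate = 8000
bit_depth = 3                               # 2**3 = 8 quantization levels
samples = [math.sin(2 * math.pi * 1000 * n / sample_rate) for n in range(8)]
quantized = [quantize(s, bit_depth) for s in samples]
errors = [s - q for s, q in zip(samples, quantized)]   # the "error wave"

# Each error is at most one step (2 / 2**n), so more bits => smaller error wave.
print("max error:", max(abs(e) for e in errors))

# Rule of thumb: SQNR is roughly 6.02 * n dB for an n-bit quantizer,
# so every extra bit pushes the noise about 6 dB further below the signal.
print("approx SQNR at 16 bits:", 6.02 * 16, "dB")
```

With only 3 bits the error wave is clearly audible as noise; at 16 bits the same formula puts the noise roughly 96 dB below the signal.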

Flash Tutorial: Dynamic Range

The tutorial clearly states the two meanings of dynamic range.

I really like the example that compares Beethoven's Symphony No. 5 with Deva Premal's Sammasati. It clearly shows the difference between two audio pieces with different dynamic ranges. On the pages where the quantization process is shown (rounding down), it would be nice to show the original positions of the sample points, or vertical lines showing where the samples are taken. I should have written this earlier, in the assessment for Flash Tutorial: Sampling and Aliasing. Overall this is a very nice learning supplement, and it is easier to read than the text.
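The two meanings the tutorial gives both come down to a decibel ratio, and that ratio is easy to compute. A minimal sketch of my understanding (the amplitude values here are made up for illustration):

```python
import math

def dynamic_range_db(loudest_amplitude, quietest_amplitude):
    """Dynamic range as the dB ratio between the loudest and quietest
    amplitudes in a piece (the first meaning of dynamic range)."""
    return 20 * math.log10(loudest_amplitude / quietest_amplitude)

# A piece whose loudest passage has 100x the amplitude of its quietest:
print(dynamic_range_db(1.0, 0.01), "dB")        # 40 dB

# The second meaning: the range a digital system can represent.
# A 16-bit system spans 2**16 amplitude levels, roughly 96 dB.
print(20 * math.log10(2 ** 16), "dB")
```

A wide-dynamic-range recording like the Beethoven would score high on the first measure; a heavily compressed track like the Premal example would score low, even on the same 16-bit system.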

I also read through 5.3.1 Mathematics and Algorithms for Aliasing. I could pretty much work out the algorithm while trying the Flash Tutorial: Sampling and Aliasing, but the Nyquist and Aliasing tutorial demonstrates it much better.

Personal preference:

Can we horizontally line up Figures 5.9 and 5.10, and adjust the table that contains Figures 5.11 and 5.12? The layout currently looks quite messy.
