
Audio Signal Processing

What is the meaning of "strobe" with respect to digital signal processing?

A strobe is a signal that defines a sampling period for another signal. Strobes are usually level-sensitive. For example, an analog-to-digital converter often has a sample-and-hold circuit in front of it, controlled by a strobe: "on" means sample, "off" means hold.

In signal processing, the strobe-open time is called the "aperture". The fact that it is non-zero in real systems leads to an artifact called "aperture error".

A clock, by contrast, is edge-sensitive. Clocks can have their own kind of aperture error, jitter, which also causes artifacts in processing.
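To get a rough feel for aperture error, here is a minimal Python sketch (my own illustration, not part of the answer above) that compares instantaneous sampling against a strobe that stays open for a small, nonzero aperture and averages the input over that window. The tone frequency, strobe rate, and 2 microsecond aperture are assumed values.

    import numpy as np

    # Dense time grid standing in for the "continuous" analog input (assumed values)
    fs_analog = 1_000_000
    t = np.arange(0, 0.002, 1 / fs_analog)
    x = np.sin(2 * np.pi * 5_000 * t)           # 5 kHz test tone

    fs_sample = 44_100                          # strobe rate
    aperture = 2e-6                             # strobe stays open for 2 us (assumed)

    ideal, held = [], []
    for n in np.arange(0, t[-1] - aperture, 1 / fs_sample):
        ideal.append(np.interp(n, t, x))        # zero-aperture (instantaneous) sample
        window = (t >= n) & (t < n + aperture)  # strobe-open interval
        held.append(x[window].mean())           # a real sample-hold averages over the aperture

    print(f"worst-case aperture error: {max(abs(a - b) for a, b in zip(ideal, held)):.6f}")

The error grows with the input frequency and the aperture width, which is why fast signals need short apertures.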


How important is audio signal processing?

It can make the difference between an enjoyable listen and an irritating one. Sound evokes strong emotions in every creature with ears. If an audio signal is distorted, for instance, or if certain frequencies are harsh, the average listener will not be able to tell WHAT is wrong, but they will still hear it AS wrong. For example, if a person with an untrained ear is out at a club and the PA is not dialed in properly, they will want to leave after the band's first tune or two. They may decide that "the band sucks," that it's "too loud," or even "not loud enough," because they don't have the training to pinpoint the problem. All they know is that they suddenly feel unhappy and irritable. Maybe they'll figure out it has something to do with the sound, but they won't know what. They will just leave, and probably never come back to the club, the band, or the DJ.

What is signal processing in audio?

Some sound cards and microphones sample at a low rate such as 8 kHz and are digitally resampled to a higher standard rate like CD-quality 44.1 kHz. Humans hear sounds between approximately 20 Hz and 20 kHz. To reconstruct a signal at ~20 kHz, you have to sample at least twice as fast, so 44.1 kHz accomplishes this nicely and leaves some additional space, called the transition width, to make sure 20 kHz is captured. There is plenty of professional equipment that records at rates higher than 8 kHz, such as 44.1 kHz, 192 kHz, and so on. Doing it digitally gives an approximation, not the true sound signal.

So why bother with anything higher than 44.1 kHz if that is already high enough to capture the full range of human hearing? You will see this quite frequently in digital signal processing. It is called oversampling, and it is done to avoid something called aliases. Frequencies above half your sampling rate (the Nyquist frequency) "fold" or "mirror" back over that point and create false signals called aliases. Basically, it is noise: it is not supposed to be there, it is not your signal, and it sits at a frequency that SHOULD be part of your signal, but because the alias is so strong, you can't hear the real signal at that frequency. Oversampling helps prevent this: it widens the band that aliases land in, so out-of-band energy is much more likely to fold into a portion of the oversampled spectrum that you don't care about, because it gets removed digitally anyway.

This digital filtering is done with an FIR (Finite Impulse Response) filter in audio systems, because a linear-phase FIR has constant group delay (all frequencies experience the same delay), so the sound's timing relationships are kept consistent until the signal is translated back to the time domain.
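To make the folding concrete, here is a small Python sketch (my own illustration; the 30 kHz tone and the rates are arbitrary assumptions, not values from the answer). A tone above the 22.05 kHz Nyquist limit, sampled at 44.1 kHz with no anti-alias filter in front, shows up as a false peak at 44.1 kHz - 30 kHz = 14.1 kHz.

    import numpy as np

    fs = 44_100                        # CD-rate sampling, no anti-alias filter in front
    t = np.arange(0, 1.0, 1 / fs)
    f_in = 30_000                      # assumed tone above the 22.05 kHz Nyquist limit
    x = np.sin(2 * np.pi * f_in * t)

    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    print(f"peak appears at {freqs[np.argmax(spectrum)]:.0f} Hz")   # ~14100 Hz: the alias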

What is the scope of audio signal processing? What kind of R&D exists? What universities are good for a master's in the same?

Audio signal processing has a myriad of research problems to focus on. As a musician, I am also quite fascinated by the potential that music processing holds. There is a serious amount of research going into Hindustani music (IIT-Bombay) and Carnatic music (IIT-Madras) in India alone. In the West, music processing already has good reach, and speech needs no introduction: we now have apps that listen to your voice and estimate your height, and we have seen impressive products like real-time translators and personal assistant systems. I hope this gives you some idea of the kind of research happening in audio. You might be interested in: Coursera - Acoustics, Coursera - Music, Discrete-Time Signal Processing.

What is digital signal processing?

First, you take an analog signal and convert it to digital form using an analog-to-digital converter. Then you do something with that signal using some sort of computer system, which is digital. That is the essence of digital signal processing.

It could be used to compress audio, recognize speech, process your speech to sound like someone else, pick objects out of radar signals, and a ton of other things that start with analog signals. Essentially, you need to perform some operation on the signal, manipulating it using mathematics. Filtering out noise is a big part of digital signal processing.
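As a sketch of that last point, here is roughly what noise filtering might look like in Python with SciPy. The sample rate, test tone, and cutoff are values I've assumed purely for illustration.

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 8_000                                      # assumed sample rate
    t = np.arange(0, 1.0, 1 / fs)
    clean = np.sin(2 * np.pi * 440 * t)             # the 440 Hz tone we care about
    noisy = clean + 0.5 * np.random.randn(len(t))   # broadband noise on top

    taps = firwin(numtaps=101, cutoff=600, fs=fs)   # linear-phase low-pass FIR, 600 Hz cutoff
    filtered = lfilter(taps, 1.0, noisy)            # attenuates noise energy above 600 Hz

    delay = (len(taps) - 1) // 2                    # linear-phase filter delays everything equally
    residual = filtered[delay:] - clean[:-delay]
    print(f"noise power before: {np.var(noisy - clean):.3f}, after: {np.var(residual):.3f}")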

Difficult is a subjective term. If you understand complex-number mathematics, Fourier transforms, linear time-invariant system theory, calculus, z-transforms, and the frequency domain, then you can get by, but it is still a challenge.

If not, then yes, it is very difficult, and you have some learning to do. If it is something you are interested in, a degree in computer engineering or electrical engineering should give you the related knowledge. The link is a good resource if you feel the urge to learn more about it.

It could be applied in remote-control systems, especially since there can be noise on the same frequency that you want to filter out, and since you need to digitize the signal you are receiving in order to do something with it on a computer system.

Is audio signal processing a shrinking field?

It is really interesting that I had the very same chat with my advisor a while back. As it happens, my advisor and I both have backgrounds in signal processing for audio, and as a current PhD candidate this had me worried about the future scope of my research. Let me try to share some of the points we talked about.

As you rightly point out, much of the current work seems incremental. In fact, as my advisor argued, it is not audio signal processing in particular but signal processing in general that is growing mature. So how do we move ahead in such an environment? Thankfully, signal processing dovetails well with machine learning problems, and audio is no exception. Take speech recognition as an illustrative example: the initial attempts relied purely on signal processing ideas; the first big boost came when GMM-HMM systems came into use; and recently there has been a second revolution with the advent of neural networks and DNN-HMM systems. The important lesson is that the future of audio signal processing is interdisciplinary. Several traditional, pure signal processing problems can be reinterpreted and solved more efficiently with the tools now at our disposal. While this may not have been possible earlier because of constraints on memory, computing power, and data, those barriers have now been lifted, opening new doors for MLSP research in audio.

There is an added advantage, from our perspective, in blending the areas together and developing multidisciplinary skill sets. Most ML researchers lack the background needed for audio processing. Audio signals are ephemeral in nature and have completely different characteristics from images and videos, so even when the ideas were developed by ML researchers, it is necessary to modify them to suit the nature of our data. In short, science and research never cease to amaze, and history bears testimony: every time there has been a danger of stagnation, there has been a new revolution.

What is music signal processing?

Music signal processing is a branch of digital signal processing and a very vast and interesting topic in its own right. In short, as the name suggests, it is the process of analyzing, processing, and transforming analog music signals into digital music or effect signals.

If we consider a uniform, stable analog sine wave, the computer registers the position of the wave at regular intervals and reproduces those discrete positions digitally to produce a digital waveform. This process is called sampling. The higher the sampling rate, the better the output signal: the music we hear will be crystal clear and very close to the original, but it can never be exactly the original.

The technical name for the processing here is modulation, and it is done with respect to three parameters of the signal: amplitude, frequency, and phase. Thus there are basically three types of modulation: amplitude modulation, frequency modulation, and phase modulation.

DSP is widely used in the 21st century. Its major applications include electronics and the telecommunication sector, biomedical equipment, music production and sound engineering, military machinery and advanced aviation, and computer-generated algorithms such as voice and face recognition and AI.

I've recently made a video on the basics of DSP and how it can be used in music production and sound manipulation. Kindly check out the video. Hope that helps. Thanks!
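As one concrete example of amplitude modulation in a music context, here is a short Python sketch of a simple tremolo effect, where a slow LFO scales a tone's amplitude. The tone, LFO rate, and depth are assumptions of mine, not values from the answer.

    import numpy as np

    fs = 44_100
    t = np.arange(0, 2.0, 1 / fs)
    carrier = np.sin(2 * np.pi * 220 * t)        # a 220 Hz tone standing in for the music signal

    # Amplitude modulation (tremolo): a 5 Hz LFO scales the signal's amplitude
    lfo = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))  # swings between 0 and 1
    tremolo = carrier * (0.3 + 0.7 * lfo)        # depth and offset chosen arbitrarily

    # Frequency or phase modulation would instead vary the argument of the sine,
    # e.g. np.sin(2 * np.pi * 220 * t + 2.0 * np.sin(2 * np.pi * 5 * t)) for phase modulation.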

What are some interesting DIYs in audio signal processing?

Thanks for the A2A :) I may not be the right person to answer this, as I have not worked in audio signal processing to date. Still, I would like to mention a few resources I have come across: Digital Sound: Matlab, Matlab Audio Processing Examples, the audio signal processing discussion group/forum on DSPRelated.com, Audio Sources and Signal Processing (Analog and Digital), and Audio Signal Processing for Music Applications on Coursera. Hope this helps :)

What are some ideas for audio signal processing side projects?

Have you ever heard of lock-in amplifiers? They can pick out a signal buried below the noise floor, and with a modest amount of electronics you could make a digital one. These things go for $4K+: https://www.thinksrs.com/downloa...

You could create a tunable filter bank, using a modest amount of electronics, and replace this thing, using cascaded sections of IIR biquads: https://www.thinksrs.com/downloa... (see also "Design IIR Filters Using Cascaded Biquads").

You could create a musical synthesizer that takes MIDI input; for example, create a better synthetic horn section. A lot of commercial synthesizers do a great job of emulating a piano but a terrible job of emulating a brass section, such that an audience can tell the difference. You could model a horn as a set of differential equations that you evaluate in real time using a 4th-order Runge-Kutta iterative solver. With GPUs, we have an enormous amount of horsepower at our disposal. There are books and papers on this, and a lot of good example code and books from CCRMA, the Center for Computer Research in Music and Acoustics at Stanford.
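For the filter-bank idea, here is a rough Python/SciPy sketch of one tunable band built from cascaded biquads; the order, band edges, and sample rate are assumptions for illustration only.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48_000                          # assumed sample rate
    low, high = 500, 2_000               # tunable band edges in Hz

    # An 8th-order Butterworth band-pass expressed as cascaded second-order
    # sections (biquads), the numerically stable way to run higher-order IIR filters
    sos = butter(N=4, Wn=[low, high], btype="bandpass", fs=fs, output="sos")

    t = np.arange(0, 1.0, 1 / fs)
    x = np.random.randn(len(t))          # white noise as a test input
    y = sosfilt(sos, x)                  # each row of `sos` is one biquad in the cascade
    print(sos.shape)                     # (4, 6): four cascaded biquads

A full filter bank would just repeat this with different band edges and run the bands in parallel.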

What is the best audio signal processing library for Python?

Thanks for the A2A. I must admit I am still on the MATLAB wave for developing algorithms and have been meaning to switch to Python, but I haven't done it yet. I do have some experience doing audio signal processing in Python, though. I think the best audio analysis library is LibROSA; see this for a comprehensive list of all audio libraries in Python. PyAudio is good for real-time audio read/write, and this blog might be a good way to get started with real-time audio processing in Python. Hope that helps.

I have done all my real-time audio development in C++, and you can't get much better than that in terms of speed. You can look at the STK library in C++, for example. But it will be nice to be able to do this in Python.
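As a minimal starting point with LibROSA (the file path and feature choices below are placeholders I've assumed, not anything from the answer):

    import librosa

    # "clip.wav" is a placeholder path, not a file referenced in the answer
    y, sr = librosa.load("clip.wav", sr=22050)            # decode and resample to 22.05 kHz

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # a common analysis starting point
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)  # onset-strength envelope
    tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)

    print(mfcc.shape, tempo)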
