Tom-King and Euphonium Lite, thank you for your responses. I think perhaps my intentions were not clear, so let me try to make a coherent argument for the two things I am interested in (which are indeed separate). The first motivation is designing a training tool/method that helps players get closer to their desired sound. The second is documenting and exploring changes in vibrato over time and comparing them to trends seen in other areas of music (more of an academic exercise).

The basic tools I have developed so far take sound files of solo cornet (background instruments are fine at this point, as long as they aren't too loud) and extract a series of note waveforms using deep learning and classical signal analysis, automatically quantifying for each note: pitch, pitch variation, vibrato frequency, vibrato amplitude envelope, bends, and so on. These can be visualized, compared, clustered, etc.
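For anyone curious what "quantifying vibrato" looks like in practice, here is a minimal sketch of the kind of per-note measurement involved. It assumes you already have a pitch track for one note (in cents relative to the note's nominal pitch, at a fixed frame rate) and estimates the vibrato rate and depth from the spectrum of that contour. The function name, the frame rate, and the 3–10 Hz search band are my own illustrative choices, not the actual tool's code.

```python
import numpy as np

def vibrato_params(pitch_cents, fs, band=(3.0, 10.0)):
    """Estimate vibrato rate (Hz) and depth (cents) from one note's pitch contour.

    pitch_cents: pitch track in cents relative to the note's nominal pitch
    fs: pitch-track frame rate in frames per second
    band: plausible range of vibrato rates to search, in Hz
    """
    x = np.asarray(pitch_cents, dtype=float)
    x = x - x.mean()                      # remove the note's centre pitch
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    k = np.argmax(np.where(in_band, spec, 0.0))   # strongest in-band oscillation
    rate = freqs[k]
    depth = 2.0 * spec[k] / len(x)        # a sine of amplitude A peaks at A*N/2
    return rate, depth

# Synthetic check: a 2-second note with 5 Hz vibrato, 40-cent depth,
# tracked at 100 pitch frames per second.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
rate, depth = vibrato_params(40.0 * np.sin(2 * np.pi * 5.0 * t), fs)
# rate ≈ 5.0 Hz, depth ≈ 40 cents
```

A sliding-window version of the same idea gives the vibrato amplitude envelope over the course of the note, which is what reveals the nuance (delayed onset, widening toward the note's end) that the summary numbers alone miss.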
1. Training:
As Tom-King points out, top players have both excellent control of the pitch, loudness, brightness, etc. of their instrument and a sound concept that they wish to employ at any given moment in a given context. Through years of training (and natural ability) these players have gained these skills, so that their emotions and ideas can be readily translated into sound out of the bell. For a much larger group of players, however, I believe approaching this level of mastery is very difficult. The problem with the 'throw them in the pool' method of training is that if the student starts out far from the desired goal, the cycle of listening, playing, and listening again may be very inefficient and may never converge on that goal. Many have a fine vibrato, but few can create the nuance and subtle variability that I think really separates the top players from the masses.

Further, I find for myself that I can have trouble separating what I am trying to do with my muscles from the sound coming out of the horn. I can hear all the nuance when I record myself and play it back, but this is harder in real time in the practice room. Using a real-time spectrum analyzer (such as the Analysis function in the mobile app 'TE Tuner'), I can see the oscillations on a pitch while trying to create vibrato. By matching the wiggles on the screen to patterns from a recording, in addition to using my ear (i.e. using two modalities of feedback), I have had an easier time learning the art. It is well established in the psychology literature that engaging multiple sensory modalities speeds and enhances training. For me, the combined audio/visual feedback works better than listening alone for getting close to the desired result. I agree that this kind of tool becomes unnecessary as training progresses, but for many players I think it would be a valuable aid on the path to mastery of fine-scale instrument control and vibrato.
This would also allow students to directly see the effect of the different methods of vibrato production that Euphonium Lite mentions.
2. Music History:
It seems clear that patterns and usage of vibrato have changed significantly over the last 50-100 years in brass bands (this is quite evident if you read the various arguments on this and other forums). Similar changes can be seen in orchestral playing and in singing (both classical and popular), and a reasonably large literature exists (particularly for singing) that discusses this. For orchestral trumpet playing, there is even some work on regional differences, as mentioned by Euphonium Lite. I am interested in looking at cornet vibrato and using machine learning to quantify changes over time (and perhaps by region) and compare them to changes we see elsewhere, such as in the voice. There has been discussion of this in the voice literature, attempting to link these changes to broader societal changes. Could something similar (worldwide or UK-specific) also be at play? This is certainly an exercise in social science, but given the computational tools I think it would be rather straightforward.
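To give a concrete sense of what "quantifying changes over time" would mean here, the sketch below summarises per-recording vibrato measurements by decade. The numbers are purely hypothetical placeholders (I have no such dataset yet); the point is only the shape of the analysis: each analysed recording contributes a feature vector, and trends fall out of simple aggregation before any fancier clustering is applied.

```python
import numpy as np

# Hypothetical feature table: one row per analysed recording.
# Columns: decade, mean vibrato rate (Hz), mean vibrato depth (cents).
# Values are illustrative placeholders, not real measurements.
recordings = np.array([
    [1930, 6.8, 55.0],
    [1930, 7.1, 60.0],
    [1960, 6.2, 45.0],
    [1960, 6.0, 40.0],
    [1990, 5.4, 30.0],
    [1990, 5.6, 28.0],
])

def trend_by_decade(table):
    """Mean vibrato rate and depth per decade, keyed by decade."""
    out = {}
    for decade in np.unique(table[:, 0]):
        rows = table[table[:, 0] == decade]
        out[int(decade)] = (rows[:, 1].mean(), rows[:, 2].mean())
    return out

trend = trend_by_decade(recordings)
```

With real data, the same feature vectors could feed a clustering or regression step to test whether vibrato has narrowed and slowed over the decades, and whether regional styles form distinct groups.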