Which section for these test pieces?

Discussion in 'The Rehearsal Room' started by Sandy Smith, Mar 16, 2014.

  1. Anno Draconis

    Anno Draconis Well-Known Member

    Messages:
    3,191
    Location:
    Huddersfield
    All I know (from one of their players) is that they were playing it during the year before it was the area piece - so 2007. However, that's not to say that it was written then, or that they didn't also play it earlier. If I find out, I'll update...

    I'm not sure I agree - firstly, those bands looking to be promoted rather than simply protect their current position will want to push themselves with a tough piece, I'd have thought. Secondly, how often do we hear both players and MDs complain about being given a piece that is "too easy" and therefore uninteresting to rehearse? I think the "safe" choices might be made in those bands trying to avoid relegation, certainly, but I very much doubt it would happen across the board.

    Interesting that there are two directly contrasting views on this - Dave arguing that (for the Nationals series at least) bands are likely to play it safe in order to protect their grading, while Philip fears that bands would pick excessively difficult pieces in order to be rewarded disproportionately for flawed performances. However, I'd give the same answer to both, which is to say that a well-balanced marking system is perfectly capable of controlling both factors, and in the end the wider adoption of own-choice works for the major contests would simply increase the pressure on an MD to get the piece selection absolutely right. Which is fine by me, actually. A good MD ought to be able to judge what their band could do if pushed, and push them just enough to get there without killing or disillusioning the players (or replacing a third of them with bought-in deps). In fact, the more I think about it, that pretty much summarises a brass band conductor's job.

    In a lot of ways, the set test piece can occasionally give a poor MD a get-out clause and discourage critical self-analysis ("the band played badly because the piece was too hard" as opposed to "the band played badly because my rehearsal technique on this piece was poor", for example). Having to pick an own-choice piece that suits the band from a defined range of music forces a greater awareness of what the band can do, can't do or might do if pushed, and requires MDs to acquire a wide-ranging knowledge of the potential repertoire that would suit their band - something I don't feel happens enough currently, which is one reason why we hear the same pieces over and over again.

    The marking scheme I suggested earlier is just a rough first draft, and could quite easily be tweaked to encourage OR discourage harder selections. In the end, though, a band picking a piece that's too hard is going to fail on both musical and technical fronts, weighting notwithstanding. The awarding of a "technical mark" simply makes explicit a process which must happen currently anyway - an adjudicator must pay attention to ensemble, intonation, rhythmic accuracy, etc. regardless of whether specific marks are explicitly awarded for these. The main difference that a purely technical mark would make is therefore one of perceived transparency, I reckon. Note also that I absolutely don't suggest weighting the "musical" mark; a wonderfully musical performance of Saddleworth Festival Overture should be marked the same as an equally wonderful rendition of Blitz, Oceans or Spiriti.
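    To make the arithmetic concrete, here's a minimal sketch of the kind of split I mean. Every figure below - the mark ranges and the weighting band alike - is purely illustrative, not a worked-out proposal:

        # Hypothetical two-part marking: the musical mark counts at face
        # value, and only the technical mark is scaled by the piece's
        # difficulty weighting. All numbers are illustrative assumptions.
        def contest_score(musical_mark, technical_mark, difficulty_weight):
            return musical_mark + technical_mark * difficulty_weight

        # A polished performance of an easier piece (weight 1.0) versus a
        # flawed performance of a harder one (weight 1.2):
        print(contest_score(95, 92, 1.0))  # 187.0
        print(contest_score(88, 80, 1.2))  # 184.0

    Widen or narrow the permissible weights and you encourage or discourage the harder selections accordingly.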
     
  2. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    Can't agree with the three test piece scenario. The idea of a band not being allowed promotion, although their marks suggest they deserve it, because they haven't played one of the "difficult" pieces is unworkable.

    Without some sort of technical merit mark, how is it decided what constitutes a hard, medium or easy challenge for each section?

    Still think Andrew is on the right track here with the technical merit mark. His argument has persuaded me to change my initial misgivings.
     
  3. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    Absolutely spot on.
     
  4. Tom-King

    Tom-King Well-Known Member

    Messages:
    1,976
    Location:
    Gloucestershire
    That was just an off-the-cuff idea. I think it has some merit, but I can be persuaded against it.
    I can see (and agree) that it might be tough to swallow for a band to come top of the pile without having attempted the hardest piece and still be denied promotion.
    With that said, do points alone suggest that a band is ready to be promoted? Would a band that isn't confident enough in its abilities to take on choices that bridge the two sections (to use your examples - Pageantry/Three Figures/Dances and Arias/Ballet for Band or something similar) really be ready to move up to a section where those pieces are among the easiest it could face? I'm not sure it would be.
    I'm not suggesting that bands have to choose the hardest piece for all three years that make up the promotion tables - requiring it in one of those three doesn't seem unreasonable.


    I think you misunderstand me about technical merit marks - I'm all for them, and they were included in what I wrote (I may not have been clear enough).
    I don't see any reason that the two ideas can't go together - weighting allows the challenge to be offset by reward, while the requirement for a promoted band to take on harder material suggests that they're suitable for the section above.


    What I think is unworkable is trying to set fair weightings down for long lists of test pieces, within a very limited timeframe (from when a band declares its choice to the contest date itself), without the bands in question having any real idea of what the weighting will be before they choose the piece. Hardly fair or transparent, is it?
    Even less workable is the idea that every test piece out there has a set weighting and that's the end of it. There are far too many test pieces for that, besides which any mark is going to be subjective. If you don't have the same people to do it every time (is that realistic?), what do you do with commissions after the fact (can you get the same people together in 1, 2 or 10 years' time to add a new piece)? No, that's simply not workable.


    The idea behind a short list (three, for example) is:
    Firstly - Adjudication. It simply isn't fair to expect a decent set of results from first to last, with the number of bands you get in each section at the areas, if we're allowing (potentially) every band to pick a different piece (and in certain areas and sections, that could be a huge number). As these results matter much more to bands than own-choice contests, this is an important consideration in my opinion, and probably why we've stuck to one piece for so long.
    Secondly - Fair Weighting. It's both more manageable and more objective to weight three pieces of varying difficulty relative to each other than it is to try and weight a much larger number (which could be much closer in difficulty).
    Thirdly - Fairness and Transparency to competitors. If everyone is aware of the choice of pieces and what the weightings are going to be, then the choice and its consequences are entirely down to the band/MD (which of the set pieces to play) and not at all down to a weighting panel deciding how each band's piece is to be weighted after they've already chosen it (that would be incredibly unfair).



    I'm curious whether the part you objected to was having a shortlist to choose from (I suggested 3, and wouldn't go much higher than that personally) or whether it's only the other suggestion I added that you disagreed with.
    If your disagreement is with giving bands a short list to choose from then we'll have to agree to disagree - I personally wouldn't like the thought of the most important contests being loose own-choice.
     
    Last edited: Mar 23, 2014
  5. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    I am working on the idea put forward by Andrew Baker and Statto, on exactly the principle that every piece has a set weighting. Can it really be done any other way?

    With regard to the same people doing it and adding new music, how does the same principle of technical difficulty work in sports like ice skating, diving and the new snowboard-type events recently seen at the Olympics? Surely they must have a board or panel assessing new moves etc.? I also assume that the personnel on these panels change over time. Do they also reassess the weightings over the years to accommodate the general development of techniques? It must be workable in these sports, so why not in our case?
     
  6. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    re. Firstly - I don't understand why you couldn't expect a fair set of results from 1st to last with technical weighting and marks for the playing.


    re. Secondly - why is having only three pieces more manageable and objective? ... and for whom?


    re. Thirdly - this point is covered by having predetermined weightings for all test pieces, as set out in post #85. This is how sport does it. Why not us?


    For me a short list of three is just too small, and again pigeonholes everything too much. If the weightings are in place I would be looking at choosing categories (composer / nationality / era of composition or specific contest use) which allow lists of at least 8, 9 or 10 pieces.


    If you look at the quote in post #83 from Andrew, he puts it across far better than I can. This idea of increasing knowledge of repertoire is absolutely central to my whole argument over this matter.
     
  7. Tom-King

    Tom-King Well-Known Member

    Messages:
    1,976
    Location:
    Gloucestershire
    Can it even be done that way? Surely there are FAR too many pieces for that to be a realistic undertaking?

    As an idea to enhance own-choice contests it would be very interesting - though there's too much work involved for me to believe it would actually happen.


    "Can it really be done any other way?"
    --Well, if you're looking at this from the perspective that the area contests should be own-choice (or own choice within a loose framework), then no, there isn't any other way - you'd have to weight every possible piece.
    --If we're looking at short lists for bands to choose from (say 2-5), then it's perfectly manageable - the selection panel weights those options on a one-off basis for that particular contest, and it's up to the bands which they want to pick, with all the information in front of them.


    I'm not sure you'd get all that much agreement on an own-choice area contest (even with the caveat of 10-year windows to pick pieces from) at any rate. I can't say I'd be too happy with it.


    Can't say I know too much about where these sports get their gradings from - but I'd bet they have a good deal more resources available than the decentralised amateur music scene that we represent. We have some awfully talented and dedicated people, but I really don't think this is doable.

    I'd assume that coming up with a way of setting weightings for a single test piece would be extremely complicated and time-consuming (and then you have the sheer number of pieces to look at, and making them fairly balanced against each other). Do we have the resources to get that done?

    And if the weightings are fairly balanced, are bands not simply going to choose the pieces they already know (as there's no penalty for doing so)? Or is how often a piece is used (and how many recordings are available) weighed against the general difficulty of the piece?
    The most popular pieces from each composer or era will still get the most usage even within the suggestions you're making - having 15 pieces (3 per section) per area contest that may not be all that well known potentially exposes bands to more music than allowing bands to go to the pieces that they already know (or know of) that fit the bill...


    Serious question - what's the goal of this discussion for you Sandy?

    Are we talking idealistic goals and hypothetical scenarios (how we wish we were able to do it), or are we talking about obtainable and implementable changes?

    Although I don't share the vision of an own-choice area contest, I can certainly agree that weighting all pieces would make for fairer adjudications of own-choice competitions generally. I don't see it as practical, but that's another point entirely.
    Would you see my idea of a short-list choice of pieces for the areas (around 3), with weightings set by the selection panel, as an improvement on the single-test-piece format we have now? Does it not seem a lot more obtainable (regardless of whether you'd like to see wider choice available)?



    (I have to go out, this will do for now).
     
  8. Bob Sherunkle

    Bob Sherunkle Active Member

    Messages:
    451
    I can't help thinking that a significant point is being missed in this discussion.

    If changes were to be made to the format of the national contest it would not happen via a consensus on this or any other forum but at the wish of the contest owner.

    Also.

    Would the degree-of-difficulty weighting for a piece change if the MD re-scored it?

    Bob
     
  9. euphoria

    euphoria Member

    Messages:
    197
    Location:
    DENMARK
    In all the European national championships I know of, the own-choice element has always been an integral part of the contest, most often as set piece + own choice. I have myself played in this format in every section of the Danish championships since 1978, and I have very rarely heard complaints about the adjudicators' ability to compare the performances of the different pieces of music. More often than not these adjudicators have been British, and I don't think that ability suddenly vanishes as they cross the North Sea on their way back to Britain.

    I don't think a specific difficulty weighting attached to each possible piece of music is necessary. I firmly believe that adjudicators automatically factor these considerations into their deliberations, and if you don't trust that they are able to do that, how can you trust them to adjudicate at all?

    There is also the practicality of the weighting system (as pointed out above). In ice skating there is a very small number of possible jumps, and years pass between the introduction of new ones (mind you, I can do a few that have never been graded - the most famous involves flapping my arms like an albatross on steroids, making half a pirouette, then doing a two-footed jump and landing on my backside very elegantly).

    Erik
     
  10. MoominDave

    MoominDave Well-Known Member

    Messages:
    6,547
    Location:
    Oxford
    This touches directly on a more general difficulty with the idea - which should rate higher for technical difficulty - a piece with fiendish corner parts and no interest for anyone else, or a piece with less fiendish end parts but general difficulty all around the stand? Using a single coefficient, I see no way to distinguish between these two cases - but many bands would find one or the other very much easier to perform.

    I think it is a practical if lengthy undertaking to assign every brass band piece ever written a single-number difficulty rating coefficient (though they would need to be regularly revised, an equally large undertaking), but I have issues with the notion that that number is meaningful to apply without extreme caution. At best, it's going to be a smudge across a whole lot of variables.
     
  11. Statto

    Statto Member

    Messages:
    175
    Don't know the specific answers to these questions, but for diving, for example, my hunch is that a panel of experts reviews the relative difficulty of the dives from time to time. I've found this document, which I assume to be the most recent 'degree of difficulty' table. Lots of numbers but, for example, the four columns under '10 meter' starting on page 5 set out the dives available to Tom Daley in his specialist event. The dives range in difficulty from 1.3 to 4.7.
    http://usadiving.org/wp-content/uploads/2011/09/DegreeofDifficultyTables.pdf

    In practice, there are generally five judges. The top and bottom scores are discarded and the remaining three scores are averaged, before being multiplied by the 'degree of difficulty' factor.
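    For anyone who wants to see the mechanics, here's a minimal sketch of that calculation (the judges' awards and the 3.2 tariff below are illustrative, not taken from the table):

        # Diving-style scoring as described above: five judges, discard the
        # highest and lowest awards, average the remaining three, then
        # multiply by the dive's 'degree of difficulty' tariff.
        def dive_score(judge_scores, degree_of_difficulty):
            assert len(judge_scores) == 5, "expects five judges"
            middle_three = sorted(judge_scores)[1:4]
            return sum(middle_three) / 3 * degree_of_difficulty

        print(dive_score([8.0, 8.5, 7.5, 9.0, 8.0], 3.2))  # about 26.13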

    A variation of this method of scoring is certainly viable for own choice band contests but the essential starting point would be to grade all existing compositions. Whether it's a number between 1 and 2 or 1 and 5 doesn't really matter but it would be an extremely useful exercise, even if it didn't end up being used for adjudication purposes. Armed with the 'degree of difficulty' table, the music panel would be well placed to avoid the obvious errors made when setting some of the area pieces recently!
     
  12. euphoria

    euphoria Member

    Messages:
    197
    Location:
    DENMARK
    What is difficult for one player is perhaps easy for another. I have always been much better at the technical stuff than the lyrical. I remember reading 4BR's report from the 2005 Europeans, where Extreme Makeover was the set test piece. I haven't played it myself, but have heard it enough times to realise that there are plenty of technically difficult parts in it. What did strike me from the reports was that most bands were in big trouble in the opening, which is a simple quartet - and there wasn't a semiquaver in sight!

    I am only saying that the difficulty of a piece is a very complex matter, one that is very hard to quantify as a single number.
     
  13. MoominDave

    MoominDave Well-Known Member

    Messages:
    6,547
    Location:
    Oxford
    Gordon, I'd be interested to know if you have any thoughts on the problem mentioned by the eminent Dr Sherunkle, touched on in my last post, and further elaborated by our Danish colleague...

    To illustrate the problem... Let's assume that a multiplication factor for each piece is to be given, for values between 1 and 5. Or rather, let's assume that we rate pieces with two numbers, using the same scale - one for ensemble difficulty and one for solo difficulty...

    Under this scheme, we might settle on some numbers along the following lines (picking a few pieces I've played at various times):
    St. Magnus: Solo 4.3, Ensemble 4.7.
    Harmony Music: Solo 4.5, Ensemble 4.4.
    Brass Triumphant: Solo 2.6, Ensemble 3.2.
    Mountain Views: Solo 1.8, Ensemble 2.2.
    Tournament for Brass: Solo 3.8, Ensemble 2.4.
    High Peak: Solo 2.5, Ensemble 3.0.
    The Torchbearer: Solo 4.4, Ensemble 3.5.
    Energy: Solo 2.5, Ensemble 4.5.
    The Plantagenets: Solo 3.6, Ensemble 3.2.

    Obviously these are plucked out of thin air and are very debatable [illustrating further issues for whoever would do this to grapple with - and further note that even the categories chosen are highly dubious], but they do illustrate the basic principle that one band can find one piece difficult and another easy, while a different band with different strengths might consider those two pieces exactly the opposite way around. From that, it is easy to deduce that the use of only a single number to describe each piece will allow bands to tactically pick pieces that suit them that have a high net contesting worth. Of course, this happens now informally, but it seems perverse to enshrine the tactic, allowing bands to progress through the sections without having their weaknesses addressed.
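    To put some (equally invented) numbers on that tactical point, here's a toy calculation using the Tournament for Brass and Energy ratings from the list above. The band "strength" weights are pure fabrication; the point is only that the rank order flips:

        # Toy model: a band strong in one area effectively discounts that
        # component of the difficulty. Only the solo/ensemble ratings come
        # from the list above; the band weights are invented.
        def effective_difficulty(piece, band):
            return piece["solo"] * band["solo"] + piece["ensemble"] * band["ensemble"]

        pieces = {
            "Tournament for Brass": {"solo": 3.8, "ensemble": 2.4},
            "Energy": {"solo": 2.5, "ensemble": 4.5},
        }
        strong_soloists = {"solo": 0.3, "ensemble": 0.7}  # solo lines feel easy
        strong_ensemble = {"solo": 0.7, "ensemble": 0.3}  # tutti work feels easy

        for name, piece in pieces.items():
            print(name,
                  round(effective_difficulty(piece, strong_soloists), 2),
                  round(effective_difficulty(piece, strong_ensemble), 2))
        # Tournament for Brass 2.82 3.38 - easier for the solo-strong band
        # Energy 3.9 3.1 - easier for the ensemble-strong band

    A single averaged coefficient per piece would hide that flip entirely, which is exactly the loophole a tactically minded band would exploit.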

    At root, the problem is that ordering bands in a musical contest is an extremely ill-defined task...
     
  14. Statto

    Statto Member

    Messages:
    175
    This has been an absorbing thread, one of the best on here for a while, but if I think back to Sandy's original question 'which section for these test pieces?', I feel that the music panel should indeed be prepared to assign a number (or pair of numbers, per your post) to all compositions and then set pieces accordingly. Sticking with the 1 to 5 scale, a championship section piece might be at least 4.2, whereas a 1st section piece would be between 3.4 and 4.1, 2nd section between 2.6 and 3.3, etc. I think that most of us on the forum would agree that the last three 1st section area pieces have not been suitable. I accept that there is the added difficulty of grading new pieces on the basis of the score only but the music panel should be qualified to do this!
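    In code form, the banding would be something as simple as the sketch below; note that the cut-offs for the 3rd and 4th sections are my extrapolation of the same 0.8-wide bands, not figures anyone has agreed:

        # Map a 1-to-5 difficulty rating to a section, per the bands above:
        # Championship at 4.2+, then 0.8-wide bands below it. The 3rd and
        # 4th section boundaries are extrapolated assumptions.
        def section_for(difficulty):
            for lower_bound, section in [(4.2, "Championship"),
                                         (3.4, "1st Section"),
                                         (2.6, "2nd Section"),
                                         (1.8, "3rd Section")]:
                if difficulty >= lower_bound:
                    return section
            return "4th Section"

        print(section_for(4.5))  # Championship
        print(section_for(3.0))  # 2nd Section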

    Utilising the 'degree of difficulty' for adjudication purposes would be a big step but I genuinely believe it would simply formalise the process that most adjudicators already follow, consciously or otherwise. My experience (as a listener for a long time now) is that bands generally play pieces that are too difficult, no doubt fearful that they have no chance of winning if they play something 'easier'. I would rather listen to several excellent performances, rather than just a couple, and if the earlier suggestion of splitting the mark into 'artistic impression' and 'technical merit' would produce a more enjoyable listening experience then I'm all for it.
     
  15. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    It has been great to follow and contribute to a debate which has developed with many interesting and thoughtful points of view over the last number of days. To answer Tom's question in post #87 - "what's the goal of this discussion for you Sandy?" - it was always to stimulate discussion on what I believe is the most important aspect of our hobby: the music. It is by far our most under-used resource, and I believe we have to take some drastic action to stimulate knowledge of, and appreciation for, a huge number of pieces which currently lie dormant in our libraries. As usual in British banding, apathy rules. This is not helped by having no effective compulsory national body in England (like SABA) as a platform for debating issues such as these, and as a consequence bands, as a collective body, are impotent. Look at the recent issue with the registry.


    To comment further on Tom's question in post #87 - "Are we talking idealistic goals and hypothetical scenarios (how we wish we were able to do it), or are we talking about obtainable and implementable changes?" - I would like to believe that enough debate can be stimulated for proper discussion, with a chance to implement some badly needed changes to a system which I believe is no longer fit for purpose.


    In defence of the music selection panel, I believe they have a virtually impossible job to cater for the sections as they stand. The championship section is bloated beyond belief, and there are constant gripes about the pieces chosen for sections 1-4. I believe bands and conductors have to take more responsibility for the music chosen. As an example, I am sick of hearing complaints after contests along the lines of "our timpani player was criticised in the remarks, but we were beaten by a band who had no timps". Choosing from an own-choice list which includes pieces with percussion requirements for 2, 3 or even 4 players eliminates this.


    Regarding the technical weighting of pieces, every adjudicator, and indeed the music panel, already does this, as Gordon and Andrew have already said. The problem currently is that these judgements are left to each individual's conception, which leaves bands none the wiser before the event.


    Some things though can be predicted to happen in the near future.


    1) If we are to agree that 1st section bands in particular have been ill served in recent years, through no fault of their own, then they may look forward to a much sterner test in the 2015 areas - one which, because of what they have had to play in recent years, many will find far too difficult. I no longer think it is feasible for only two test pieces to serve the range of abilities across the number of bands in the top two sections.


    2) I believe an element of own choice will be introduced, initially through the 4th section, in Scotland in the near future. SABA have invested too much time, effort and money in youth development to have it hijacked by the vagaries of an unsuitable set test piece. Look at what SABA have already done with pre-draws at the behest of the bands. They, like our European neighbours, have a coherent national body able to implement decisions on behalf of all their member bands. They won't be afraid of making radical decisions in order to help the development of their players and bands. Once that barrier is broken, and more people see that you can still adjudicate fairly and properly, more will follow.


    Look toward Europe - while most European brass band federations try to equip their players, conductors and bands for banding in the 21st century, English banding will continue to operate around a fragmented late 19th / early 20th century model which is no longer fit for purpose.
     
  16. Sandy Smith

    Sandy Smith Member

    Messages:
    185
    To avoid any confusion can I say that the letter published today, 26th March 2014, on 4BR under my name was sent to them on 11th February 2014, before the commencement of the recent round of area contests.
     
  17. jamieow

    jamieow Member

    Messages:
    210
    Location:
    Wrexham
    I also played it at the 1999 Finals with Milnrow on 1st EEb and we came 2nd.

    If my memory serves me right, we gave a cracker of a performance and really enjoyed getting to grips with it at rehearsal.
    Difficult for 2nd Section.
    Wasn't it the Open piece in about 1980 or so? Think I have a tape with Besses playing it.
     
  18. James Yelland

    James Yelland Active Member

    Messages:
    1,606
    Location:
    Hinckley, Leicestershire
    I mentioned it elsewhere, but Cry of the Mountain (1st Section choice) is graded 'C' (medium difficulty) by the publisher, Obrasso; as is Three Spanish Impressions (4th Section choice). So either the selectors don't know what they're doing, or Obrasso's grading system is flawed, or both. Or something else.
     
  19. Statto

    Statto Member

    Messages:
    175
    British Open 1982, won by Besses.

    Also notable for being one of only 4 'failures' in major contests by the Black Dyke / Peter Parkes combo! An otherwise phenomenal record.
     
  20. stopher

    stopher Member

    Messages:
    479
    Location:
    Bangor
    That was probably another erratum by Obrasso in Three Spanish Impressions. Can't believe how poor their quality control was.
     