Blind Pedal Shootout on YouTube

Started by R.G., February 23, 2009, 12:51:05 PM

Previous topic - Next topic

Caferacernoc

#40
I think the tests show not only that all the overdrives can be made to sound similar, but also how important the rest of the gear is. Almost any overdrive into a slightly distorted quality vintage or boutique tube amp can make great sounds. A Landgraff or Klon into a Behringer, not so much. I always notice when I watch YouTube videos of pedal tests by ProGuitarShop that the test amp sounds like it has been set to make the pedal sound as good as it possibly can. They make a point of saying they always use the same amp, solo65, whatever that is, for all the tests, but you can tell when he plays the bypass sound that the SETTINGS of the amp are not the same for each video. So a lot of iffy pedals sound just like a great tube amp because, wait for it, they are pushing an already breaking-up TUBE AMP with the bass turned way up.
I'm reminded of when I played in my last band. My rig was a Gibson 335-style Ibanez into a Traynor BassMaster. That amp is very JTM45-sounding, and I played into a single 12" sealed cab I made myself. It was nearly the size of a 2-12" cab. Big crunchy sound. On the floor I had a tuner, wah, delay, Boss EQ, and 808 Tube Screamer. The EQ was mostly flat and only used for lead boost on the bridge pickup. The Tube Screamer was set for max volume and minimum distortion. I only used it for lead boost on the neck pickup. I was usually on the bridge pickup. And usually neither pedal was on. People would compliment me on my tone all the time and go, "ahhhh, TS808!"   :P
Regardless, I am impressed by the low noise of the Hyde and think that there are good reasons to build or buy quality pedals. Great tests!

frokost

The videos are interesting to watch. They tell me a couple of things: Pedals sometimes sound different. Pedals sometimes sound the same. Different people like different things.

Regarding the scientific side - it would be a shame to reduce music to science.

puretube

Anti-parallel series diodes as a noise gate somewhere along the signal path towards the end  :icon_wink: ?

DWBH

I liked all the videos. Cool stuff, and kudos to all the people over at Visual Sound ;)
I really like the new nifty features on those new Visual Sound stompboxes. I particularly liked the Son of Hyde. If memory serves me right, it's based around the Marshall Shredmaster, isn't it?

Also, I'm curious whether the preferred pedals would be the same if they had been tested in a different order....

R.G.

With all due respect to reverbie:
Quote from: reverbie on February 24, 2009, 05:24:07 AM
hmmmm....besides being skeptical of these types of commercial ads, I also come from the same school of thought as analogmike... after watching the YouTube clips, it did appear that the Visual pedal was second in every chain of pedals, counting from the guitar. I could be wrong. Placement surely would have an impact on tone, especially since I saw it running directly after a buffered Boss pedal, etc.
Skepticism is good. It is in fact skepticism that led to us doing this. But you are correct - we did have dogs in the hunt, we admit that up front, and we did a lot of backstage work to try to eliminate that as an issue.

You bring up an issue that may need to be tested, or at least eliminated by the design of a future experiment. By the way, as Mark can tell you, just the design of experiments is a technical specialty all on its own. I have some familiarity with it, but am by no means a specialist in the field.

Does placement in the chain of similar devices, on its own, change sound enough to change the results of such an experiment?
Clearly, I don't know, because (a) I didn't think of that ahead of time and (b) I don't know the order of the effects in the test well enough to be sure. I think this issue is worthy of a test all its own, designed properly. Another way to look at it: I can assure you that there was no attempt to nobble the test results by careful, crafty placement of Visual Sound pedals second in line every time (second in line being the magic, perfect place to be?!?).

As an interesting sidelight, the first or second position in the test is the toughest position to be in, because the winner of that matchup must be individually compared against every other pedal in the lineup to be selected overall. Succeeding positions are in fewer matchups. So being in the first pair means being tested directly against every other pedal to be overall winner. First and second pedals have a rougher test - no drawing a bye into the semi-finals as in sports competitions.

Quote
I know your boss knows this, and I remember some of your amazing insights into this, so that doesn't seem like something that would be overlooked.
I've never heard him say anything on the issue of second-in-the-chain being better, and I've never heard it mentioned by others. It is possible that he could have known it and never said anything over the ten-plus years I've known him. But I'm clueless about my own amazing insights into this. Can you point me to some of that? I apologize if I'm seeming dense or deliberately coy - I'm not. Maybe I don't understand what you mean by "knows this" (i.e. what "this" is, other than being second in line).
Quote
Randomization of that experimental parameter would help the results, as would not letting the riffmaster see which pedal he was using, which were well within his peripheral sight...that is clear from watching the video.
OK, I'll go for that, as I've already said. All I can say is that he was instructed not to shade the results and that I was watching for that. This is one of those things that the design of the experiment didn't make impossible, beyond trying to play fair, which was happening; of course, someone (this is not accusative, I do not mean this as an ad hominem attack) who did not like the results will not be able to accept the "playing fair" thing. It wasn't perfect, as I've said before - just all we could do within the time and money available.

Quote
Also a show of hands is not very scientific either...you see guys looking around for affirmation before they vote...how about filling out some surveys and then tallying at the end...and how can you extrapolate winning results if you are going head to head one at a time?
Agreed. Audience-to-audience interaction is an issue. A perfect test would have done the entire mess one person in the studio at a time, as I've already said. Beyond that, sixty people is not a very large sample size, Mark's notes about the advisor, student, and girlfriend aside. That's not perfect either.
This can only be taken as an indicator. However, the resulting head-to-head looking was fairly done, because the audience could look from one to the other for all pedals equally while voting.  It could have worked against any pedal as well as for it. It's a confounding issue (as they say in the design-of-experiments biz) instead of evidence of favoritism.

Quote
did he just do visual vs. A, visual vs. B, etc and declare Visual the winner if it beat the majority of them in a head to head or did it only have to beat one pedal head to head?
It was single elimination. Two pedals of unknown brand are compared, a vote is taken. The loser is eliminated. Next pedal is compared to the winner of the first comparison. The nominal "best" pedal is compared to all of the others with at most one level of indirection. So if the first or second pedal tried was eventually the best, it was compared against every single pedal. If the third pedal was eventually best, it was compared to all the other pedals and only indirectly to the loser of the first pairing. Basically, a pedal only gets judged second best once. This is a valid way of picking "best" (presuming the method of judging one versus the next is accepted as valid), but it does not produce a one to N ranking, and does not produce an N by N matrix of every-to-every comparisons. In particular, no, it was not Visual vs A, Visual vs B, etc. The Visual Sound pedal also would be eliminated on any single loss.
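
If it helps to see that bracket logic written out, here's a minimal sketch in Python; the pedal names and the coin-flip stand-in for the audience vote are made up for illustration, not taken from the actual test:

    import random

    def audience_vote(pedal_a, pedal_b):
        # Stand-in for the real comparison: the audience hears both and
        # votes; here it's just a coin flip for illustration.
        return pedal_a if random.random() < 0.5 else pedal_b

    def single_elimination(pedals):
        # King-of-the-hill: the current winner stays on against the next
        # pedal in line; one lost vote eliminates a pedal for good.
        champion = pedals[0]
        for challenger in pedals[1:]:
            champion = audience_vote(champion, challenger)
        return champion  # overall "best"; no 1-to-N ranking is produced

    print(single_elimination(["Pedal A", "Pedal B", "Pedal C", "Pedal D"]))

You can see from the loop why the first two positions face the most matchups: whichever pedal ends up as champion has had to survive every comparison from the round it entered onward.
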
Quote
He even double checks one of the results when it appears the Visual pedal might have barely lost...man this is psychology 101.
Can you point out which section of video that is? I'd like to review it.

Several votes were close - which indicates only that the performance of the pedals was close, equal numbers of people liking one versus the other better, and that abstention was allowed. I'd like to see whether Bob's talk to the audience about ties was on the video there. Bob did explain just this issue, that in fact there may be ties, but that we did, for purposes of eventually selecting one as best, need to pick one, and that where that happens, he'd pester for abstainers to weigh in.

There's one other issue of preconceived results going on here. You say "He even double checks one of the results when it appears the Visual pedal might have barely lost". I'd like to see (a) whether it is obvious to the audience that a Visual Sound pedal was in contention when the results were double checked, (b) whether the Visual Sound pedal was in fact the purported loser before the recheck, and (c) whether this was the only instance of a re-check. Your implicit assumption in saying this is that Bob is manipulating the results by recalling votes until the audience gets it the way he wants it. As you say, Psychology 101. Assumptions like that need testing.

You don't know Bob, so I'm sure your skepticism is based on other owners of effects companies. And that's a reasonable basis for this. However, if you did know Bob, and had participated in the run up to this, you'd know that Bob would rather just write off the money to do the test than manipulate the results. But you have only my word for that, and that's suspect too since I work for Visual Sound. So once again, I'm down to - I was there, it was done as fairly as we could, within the constraints.

You don't mention the most straightforward version of faking the results - running the tests and simply declaring the winner to be who you want. A truly unscrupulous person would just do that. Why fool with letting the guitarist favor one pedal by playing better for some pedals versus others, place all your pedals second in line, or re-call votes until you get what you like? Faking is easy. Doing it honestly is hard.

Quote
Lastly, is the one that beat Visual in any given test the overall winner for that category?
No, see the discussion of the testing method. The Visual Sound pedal could have been eliminated at any test. There was no attempt to simply compare Visual Sound to each other pedal, then declare it the winner. The Visual Sound pedal, if it was really second in line every time (which I don't know, see above) would have to be compared to every other pedal directly to be the overall winner. A Visual Sound pedal would be eliminated the first time it lost a vote, just like any other pedal in the contest.

Quote
And the thing about 10,000,000 hits on the footswitch...come on people, this is a rating given when tested from a machine that taps the switch repeatedly in succession in a completely "unhuman" way...those crappy blue toggle switches are rated for like a million latches, and I can guarantee you they would never last that long under actual playing conditions. I would focus more on the ease of replacement and fewer mechanical parts, which are both positives.
You're correct about the testing method. A machine did the test, I'd guess. As I said, I didn't do the testing of the switch. ALPS specifies that in their product literature. Do I expect 10M operations? Of course not. Do I expect it would last ONE million from a switch rated for ten million, if anyone ever used a pedal that much? Yes. I think that's a valid engineering and human factors thing to do. Do you agree?

One million operations is one operation per second for 11.6 days, working 24 hours a day. Or, for a four-hour gig of three-minute songs with two presses per song, that's 6250 gigs; at five gigs a week (you're on tour), 50 weeks a year, that's 25 years of gigging. That is, far more use than anyone, professional touring musicians included, will ever give the pedal.
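
For anyone who wants to check that arithmetic, here's the same back-of-the-envelope calculation written out in Python; the song length, presses per song, and touring schedule are just the assumptions above:

    RATED_OPERATIONS = 1000000  # one tenth of the 10M switch rating

    # One press per second, around the clock:
    days_continuous = RATED_OPERATIONS / (60 * 60 * 24)   # about 11.6 days

    # Gigging wear: 4-hour gig of 3-minute songs, 2 presses per song
    presses_per_gig = (4 * 60 // 3) * 2                   # 160 presses per gig
    gigs = RATED_OPERATIONS / presses_per_gig             # 6250 gigs
    years = gigs / (5 * 50)                               # 5 gigs/week, 50 weeks/year = 25 years

    print(round(days_continuous, 1), int(gigs), int(years))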

On the toggle switches: those are indeed rated for far fewer operations, largely because toggle switches rated for 10M are simply not available in sizes that fit. However, I suspect (but cannot prove  :icon_lol: ) that no player ever flips the toggle switches twice per song for every song in a gig. Of course, we don't claim the toggles are rated for ten million operations, either.

We did try to make the mechanical plunger simple and easy to replace, as well as non-critical. The actual electrical switch, if it ever needs replacement, is currently $1.13 in ones from Mouser (and is a stocked Mouser part) and can be replaced by just soldering a new one into the small PCB which holds the tactile switch.

Thanks - making that cheap and easy to replace was deliberate.
Quote
I am very critical when it comes to these type of things (ala my degree).
As you should be - see my comments on skepticism above.

Quote
Why not do it right the first time?
As I addressed - time and money. Would you like to participate in funding the perfect test?

Quote
This wouldn't even pass the very basic criteria for a valid scientific experiment with statistically significant results...
No such claim was made, was it? In point of fact, I believe I've done a lot of self evaluation already in explaining that there were several areas where it could have been done better in the sense of a perfectly designed experiment.

"Statistically valid" is yet another criterion. That's an easy thing to toss in, and I warned Bob ahead of time that we'd hear about sixty people watching and voting not being "statistically valid". People hear "statistically valid" and think "yeah, yeah, statistically valid", but they have no particular concept of the things that go into making a statistically valid test. I'll leave it at "statistically valid" being a much, much bigger test than was done. We did decide that some information was better than no information, which was the other alternative.
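
Just to give a feel for what the statisticians would ask of us, here's a rough sketch of an exact two-sided binomial test on a made-up 35-to-25 show of hands among 60 voters; the split is purely hypothetical, not taken from the videos:

    from math import comb

    def two_sided_binomial_p(successes, n, p=0.5):
        # Probability, assuming no real preference (p = 0.5), of a split
        # at least as lopsided as the observed one, in either direction.
        def prob(k):
            return comb(n, k) * p**k * (1 - p)**(n - k)
        observed = prob(successes)
        return sum(prob(k) for k in range(n + 1) if prob(k) <= observed)

    # Hypothetical 35 vs 25 vote among 60 listeners:
    print(round(two_sided_binomial_p(35, 60), 3))  # roughly 0.25 - nowhere near "significant"

In other words, even a ten-vote margin in a room of sixty could plausibly happen by chance, which is part of why I keep calling this an indicator rather than proof.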

Quote
i hate to be the total %^&*ah here but let's call a spade a spade.
As a personal observation, I find that the words "i hate to be..." can often be interpreted as "it gives me no displeasure to announce that ..."  :icon_lol:

Quote
This is an infomercial for Visual, with all due respect to RG. We have all seen this type of "blind" experiment with soda, food dehydrators, magic cleaners, leg hair removing wax, you name it.  It's a ploy to sell pedals while simultaneously giving a "shout out" to the very pedals that were the inspiration for the Visual pedals in the first place. Obviously the CEO is a decent man, but it's equally obvious he's a businessman too.
And this gets right down to the issue of belief. If you believe that it's all smoke and mirrors, you're going to disbelieve the results, no matter what the test, test methodology, etc. And that's OK. Believe what you want. Of course we're happy to see that we did well. (As I've already stated here...) Would we make this public if we came out on the bottom of this? Duhh...

On the other hand, did we fake the results, either directly (by simply announcing winners irrespective of the voting) or by subtly and cleverly manipulating the pedals, the audience, the guitarist, the playing, by maladjusting the competitors ahead of time (hmmm, one you missed!), by modifying the competitors ahead of time (there's another), by having a "sucks" button that Bob could press with his toe on the floor (ayie! another), by modifying half the chairs in the audience to vibrate when the "right" pedal was in play, or by any of the several million other ways it could have been fudged? No. That can be believed or not.

I would not submit this to a peer-reviewed scientific journal (hmmm... the Journal of Scientific Sound Evaluation?) as a perfectly designed experiment. Is it pure and utter balderdash and self-serving la-la like some audio and guitar advertising? It is not. But that's an item of belief or not. I have to tell you that some research I've heard about in peer-reviewed scholarly journals, presented as absolute, well-designed-experiment, statistically-significant results, has turned out after a short while to have simply been made up. Whatcha gonna do when the guys in the lab coats fake their data?
Quote
But it definitely did teach me how similar some of these pedals sound which was very cool. For that alone it was valuable. Especially the overdrives. Also, I have played Visual pedals and am impressed with the way they sound, which is no surprise. Thanks for the link. Cool post nonetheless.
Nonetheless.  :icon_biggrin:
We expected most of this kind of evaluation of the effort, and some that you didn't mention as well.
In all seriousness, I would welcome your help in designing an unassailable test methodology for pedal sounds. 
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

tiges_ tendres

I think this was a great test.  And whilst I listen to all the complaints of people saying "It's not a fair test," blah blah blah, I can't help but think that a lot of people missed the purpose of the test, which seems to be to debunk a few common myths and show the world the quality of product that Visual Sound produces.  The test was even set up so that Visual Sound products could potentially fail against their counterpart pedals!  I don't see many, if any, companies doing that.

A lot of you guys sound like you want to file a report against Visual Sound with the Stompbox Police.   Visual Sound was not obligated to provide a fair test; they weren't even obligated to provide a test.  But they did, and very bravely, I might add. 

We can all sit here and argue about pedal placement in the chain, cognitive studies etc.  But I think that would really be overkill for a test on this scale.
Try a little tenderness.

WGTP

Great stuff.  Need more of it.  It's easy to poke holes in any testing procedure unless the budget is nearly limitless.  I already thought most overdrives and distortions sounded the same, EXCEPT FOR THE EQ.  That is probably the easiest difference for the ears to hear between devices that have gone thru a recording chain.  Even sitting with a breadboard on my amp and switching out op amps, it's hard to hear a difference, although you can.  Switching Si diodes to LEDs or Ge's is easy to hear.  Changing the treble roll-off cap is easy to hear, as is the bass roll-off cap.  Adding a notch filter and varying the notch is easy to hear, or messing with a BMP tone control.  Things like "feel," sustain, asymmetrical distortion, etc. are much more subtle and harder to pick up with a mic and reproduce with electro-mechanical speakers.  You can make a Dist+ or Tube Screamer into 100 different pedals just by messing with the caps that contribute to the EQ and/or the clipping diodes.  Way more AUDIBLE than the op amp, buffers, etc.   :icon_cool:
Stomping Out Sparks & Flames

Andi

I've only seen the op-amp one so far, and greatly enjoyed it. Bob (?) did seem to be cheerleading a little - understandable given that the audience seemed to be flagging a bit.

An excellent video though - if money ever permits it I'd love to see more. I must remember to see the overdrive one.

Mark Hammer

Just a couple of technical points.

1) The order of presentation CAN matter.  If I "like" #2 more than #1 (or #1 more than #2), when the one I "like more" now becomes identified as #1 for the next comparison, there can be a bias to hear the one I "know I liked" as better.  Again, not a harsh criticism, just noting unintended factors that can influence a sequential comparison such as that used.

2) When psychophysicists (i.e., the student, girlfriend and advisor) do their comparisons, holistic "liking" judgments are not used that often.  More commonly, the perceiver/rater is asked to rate which of two stimuli is "more" of some particular dimension.  That dimension need not be an objective one, but it's better if it is a single one.  So, hearing two pedals and being asked to provide a blind rating of which is "smoother" is reasonable.  And while I normally wince at the mention, I'd even accept "Which has better note definition?".  I have to find the article again, but there was a paper I found online from a guy who was a McGill music prof for a bit, and another guy from Japan, doing "semantic differential" ( http://en.wikipedia.org/wiki/Semantic_differential ) ratings of distortion pedals.
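
If it helps, a semantic-differential sheet boils down to something like this; the bipolar adjective pairs and the scores below are invented for illustration, not taken from that paper:

    # Each listener rates a pedal on bipolar 1-to-7 scales
    SCALES = ["harsh-smooth", "muddy-clear", "thin-fat"]

    def average_profile(ratings):
        # ratings: one dict per listener, e.g. {"harsh-smooth": 5, ...}
        return {scale: sum(r[scale] for r in ratings) / len(ratings)
                for scale in SCALES}

    listeners = [
        {"harsh-smooth": 5, "muddy-clear": 4, "thin-fat": 6},
        {"harsh-smooth": 6, "muddy-clear": 3, "thin-fat": 5},
    ]
    print(average_profile(listeners))  # the pedal's "semantic profile"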

R.G.

Quote from: Mark Hammer on February 24, 2009, 04:31:20 PM
Just a couple of technical points.

1) The order of presentation CAN matter.  If I "like" #2 more than #1 (or #1 more than #2), when the one I "like more" now becomes identified as #1 for the next comparison, there can be a bias to hear the one I "know I liked" as better.  Again, not a harsh criticism, just noting unintended factors that can influence a sequential comparison such as that used.
Makes sense. Always presenting the preferred selection as #1 in #1 or #2 choices would be obvious; noting a preference for #2 and always presenting #2 as the same unit would be slightly more devious.  :icon_eek:

And I need to note that I don't know whether Bob always kept the previously selected winner at its previous number (#1 or #2), whether it always became the new #1 (or the new #2), or whether it was random. I suspect that random would be the more valid test.

Quote
2) When psychophysicists (i.e., the student, girlfriend and advisor) do their comparisons, holistic "liking" judgments are not used that often.  More commonly, the perceiver/rater is asked to rate which of two stimuli is "more" of some particular dimension.  That dimension need not be an objective one, but it's better if it is a single one.  So, hearing two pedals and being asked to provide a blind rating of which is "smoother" is reasonable.  And while I normally wince at the mention, I'd even accept "Which has better note definition?".  I have to find the article again, but there was a paper I found online from a guy who was a McGill music prof for a bit, and another guy from Japan, doing "semantic differential" ( http://en.wikipedia.org/wiki/Semantic_differential ) ratings of distortion pedals.
That makes sense as well.

(I grinned to myself as I started to type this, knowing that you knew that I'd be typing it. :icon_biggrin: ) Could you help me suggest some suitable dimensions? I can think of a few, but just coming up with sensible dimensions is a tricky thing. For instance, "harshness" and "shrillness" are kind of close to "brilliance" and "clarity", the difference being primarily the consonance or discordance of the harmonic content, maybe. Asking which is shriller versus asking which one has the most brilliant, clear tone might be one subtle way to influence outcomes as well. Maybe this has to be done like the telephone polling stuff without which a USA president can't make any decision; ask the same thing several slightly different ways to try to size up the response. Kind of like asking in a poll whether the pollee thought George W. Bush was a threat or a menace.  :icon_lol:

I would like to collect up these kinds of things for inclusion in any further such testing. This kind of thing is a way we can all contribute to the formal state of the art in the effects biz, I think. Coming up with good practice for testing is a real step forward. Even better is coming up with some idea of what you lose when it's less than perfect - like having all the participants (subjects? victims?  :icon_eek:) see the presentations at the same time rather than individually, or some means of doing private voting that's possible without collecting and counting paper ballots, etc. Or whether perfection is all or nothing for such tests.

Youse guys with some psych training, help me out here. What's good, what's better, what's perfect, and where is it that we have to get or it's worthless to try?
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

analogmike

Could you help me suggest some suitable dimensions?

"which pedal has more haunting mids?"
DIY has unpleasant realities, such as that an operating soldering iron has two ends differing markedly in the degree of comfort with which they can be grasped. - J. Smith

mike  ~^v^~ aNaLoG.MaN ~^v^~   vintage guitar effects

http://www.analogman.com

liddokun

Great videos. I was very impressed by the Son of Hyde's noise floor.
To those about to rock, we salute you.

R.G.

Quote from: analogmike on February 24, 2009, 04:59:19 PM
Could you help me suggest some suitable dimensions?
"which pedal has more haunting mids?"
Haunting mids!?

Kewl. I'll write that one down.  :icon_biggrin:
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

frokost

Quote from: R.G. on February 24, 2009, 04:55:41 PM
Youse guys with some psych training, help me out here. What's good, what's better, what's perfect, and where is it that we have to get or it's worthless to try?

It's worthless to try. But still worth it. I enjoyed the videos very much, because they show a lot of pedals under the same conditions. It gives a rare opportunity to hear the difference between some pedals under certain circumstances, though of course not all. And they also show that people like different things, to repeat myself. And to elaborate - you can do these kinds of tests and make sure they follow scientific standards all you want - different people will still like different things. That's the whole point of music. Some pedals are clearly better than others, but you simply can't extend that into saying "this pedal is the best", which I'm sure you know.

It all comes down to taste. What suits people. And no such test can prove anything in that matter. In my opinion, even trying to apply the same scientific laws that the technology of the pedals follows to judging the quality of the sounds they produce is a faulty strategy.

frank_p



Note that a preliminary session like this for gathering information, putting more questions on the table, discussing which questions are valuable, how to make a better test, etc., is a valuable experience.  This is an engineering trial and not an "as scientific as possible" one (I think).  Doing an experiment like this is good for beginning to identify which issues should be investigated with closer attention.  This test could not be scientific because it was not done by an independent examiner.  That said, it is not that it has no value.  On the contrary, by filming, taking notes, and identifying problematic situations, the examiners can begin to identify the needs of guitarists.  In other words, it can be seen as a tool for gathering conceptual input.  So, if Visual Sound wants to do a survey or use another tool to orient their design methodology, there will be some base material to build on.  It is a necessity for a company to produce documentation, and to select and archive it to guide the designers' work.  Some examples: in the future they could do interviews with musicians, focus groups with different groups of people related in some way to the stompbox industry, market studies, etc.  All this to identify the different needs and ways to fulfill them conveniently (base needs, performance needs, innovation needs, etc.).

This information will, at one moment or another, have to be analysed (and this is not necessarily a scientific process) with different tools (graphs, tables, statistics, discussions, etc.) so as to put more structure into all the knowledge that has been gathered.  All qualitative and quantitative inputs should be considered at first, because they all might be points that have to be weighed in the overall quality of the product (and also its quality image).  Example: if putting in mythical chip X is an option, it should be considered even if there is no scientific proof that it is better, because it is what a lot of customers « want » and thus a selling point, even if the company knows there are perhaps no big differences compared to other options.  But the company will know what its decision is founded on; it will not be an arbitrary choice.

What I am not sure about is the way the shootout was presented; I am not sure it is a good idea to collect information, generate exposure for the product, and sell the merits of the company's products all at the same time.  I really wonder if some perverse situations might pop up.  Example: when I watched the clips, I did not like seeing the boss advertising his products at the end, while at the same time saying he was not favoring Visual Sound's stuff.  There is nothing contradictory in doing it that way, but it produces a suspicious feeling that can lead to bad thinking.  Also, the fact that it is not a scientific trial but a "public" shootout done by the manufacturer: I am not convinced it is a really good idea.  All of this is linked to the way we perceive the clips and to our beliefs, and has nothing to do with the real intentions of the people who ran the experiment.  Insisting on showing that we are good guys will not always produce the desired results.  What most people believe is that the industry is there to make money, not to reveal the truth.  If a company is willing to reveal some truth, it is because it is willing to make money with it.  And there are a lot of manufacturers that peddle false truths.  So some doubt will always be there.  That is why I am not sure (in the real sense, not the "pejorative" sense) of the effect of those clips on the general market.

My final thought when I watched it was: there seem to be a lot of goals here, so what, finally, is the « real » conclusion or purpose of this trial?  What am I looking at?  It's obviously related to some commercial intent...  These guys are playing two roles at the same time.  Can it really be sincere?

My view is that some of the material should not be in the clips.  I think what the boss of Visual Sound says at the end of the clips is a bit « too much ».  It doesn't add to the credibility of the company (or the clips).  Things like how quiet the stompboxes are should be in other videos.  I have the impression that those videos provoke mixed feelings that could have been avoided.  I also have the feeling that because we all know R.G. (and it is he who is presenting the experiment on DIYstompboxes), we tend to forget those points; because we know he is a helpful guy and are accustomed to his sincerity, we see the clips with a different eye than those who go directly to YouTube to see them.

Hope my comments are of some help (and, from my perspective, I liked the clips, of course...)


R.G.

@frank_p
Thoughtful and reasonable commentary.
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

petemoore

A perfect test would have done the entire mess one person in the studio at a time
  Or a buzzer for each participant to keep individual responses secret to eliminate 'crowd voting'.
  Of course then the responses aren't 'readable' to the camera either. A show of hands broadcasts responses completely transparently, in a way the viewer can immediately see hasn't been tampered with.
 
Convention creates following, following creates convention.

Mark Hammer

When I was in grad school, I got interested in the chemical senses: smell and taste.  One of the reasons why we tend to know much more about hearing and vision than about smell and taste is because sounds and sights disappear as soon as you stop presenting them, so you can keep presenting another one and another and another.  In contrast, smells and tastes get lodged in the very organs used to sense them.  Little molecules of flavour find their way into those little potholes known as tastebuds.  Aromatic molecules get stuck in the mucus membranes.  Which means that in the space of an hour, you could present a couple of tastes for comparison maybe 40-60 times.  Presenting colour images and asking people to press a switch as fast as they can to indicate if the two samples are same or different can be done with at least 10x as many comparisons accomplished in the same time period.  Long story short: you can get much more research accomplished in the same amount of time for sight and sound than you can for taste and smell.

For taste research, at least, it is a standard practice to not only wait a little while between stimuli, but also to use a "palate cleanser" of distilled water; the idea being that you want a taste to be experienced in comparison to nothing, and not in comparison to what's left over from the last thing in your mouth.  Note that tasting things involves swishing them around in your mouth (and I think that happens because one has to search out available "unoccupied" taste receptors to maximize the sensation).  While there is certainly no shortage of sound receptors in a pedal comparison, the sonic phenomenon being evaluated requires something analogous to "swishing the sound around".  In other words, the effect is not perceived immediately, like a splash of colour or a teaspoon of salt in a gallon of water, or a brief spray of perfume.  It is something that is perceived over time.  Presenting it over time (in our case, some 5-10 seconds or so of playing) starts to become analogous to the prolonged exposure that naturally occurs with tastes and smells, so maybe an "auditory palate cleanser" is also needed.

In our pedal context, it seems to me that the palate cleanser ought to be a clean tone.  The question is, how does it get used?  One way is certainly to have some clean playing in between pairs of pedals.  But should a palate cleanser be used between pedals up for comparison?  Perhaps, but perhaps it should be something different than 5 seconds of clean strumming.

Or maybe there should be a palate cleanser but NO gap, or only a very brief clean period, between any two pedals.  I'm sure RG could design some little PIC-controlled switching thing that would let Zac or whoever play away, Bob or whoever hits a switch to get it rolling, and the circuit systematically cycles between clean and some random order of A or B.  So, for example, A-clean-B-clean-B-clean-A-clean-A-clean-B-clean-A.  The person(s) listening then decide whether there is an audible difference in some designated dimension between them.  If it's a room full of people, you either rent those "instant polling" gadgets my senior management is so fond of at organization-wide annual meetings, or else you spend a couple of bucks and make yourself a multiplexed set of yes/no switches (a momentary 3-position toggle that returns to centre-off is perfect) that can feed a parallel port on a laptop.  Nobody sees hands, and nobody has to count.  The auto-switcher/randomizer can either keep the device being auditioned a secret (for "Is there a difference or not?"), or else can enable an indicator LED for those instances where you want people to indicate whether A or B has more of the designated dimension.
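
To make the sequencing idea a little more concrete, here's a rough sketch of the kind of randomized clean/A/B schedule and hidden-switch tally I have in mind; the trial count and the coin-flip draw are my own assumptions, and the real thing would of course live in the PIC and the switching hardware rather than on a laptop:

    import random

    def build_schedule(trials=7):
        # Random A/B order with the clean "palate cleanser" in between,
        # e.g. A-clean-B-clean-B-clean-A-...
        schedule = []
        for _ in range(trials - 1):
            schedule.append(random.choice(["A", "B"]))
            schedule.append("clean")
        schedule.append(random.choice(["A", "B"]))
        return schedule

    def tally(votes):
        # votes: one "A", "B", or "abstain" per listener, read from the
        # hidden yes/no switches so nobody sees anyone else's hand.
        return {choice: votes.count(choice) for choice in ("A", "B", "abstain")}

    print(build_schedule())
    print(tally(["A", "B", "A", "abstain", "A"]))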

Maybe I'm getting too complicated now. :icon_wink:

theblueark

#58
I've suddenly had an idea pop into my head for what could take the test further, although the chances of it happening are close to nothing.

Get the manufacturers of each pedal, or their selected representatives, involved.

1. Each Manufacturer/Representative sets their pedal settings to what they feel will be liked by the majority of players. Their "best" setting if you will. They are allowed to audition everything else to be used in the actual experiment to aid in this setting. That is to say, the guitarist, the guitar, his choice of pickup, the riff he'll be playing, the amp, etc. Of course, a neutral judge will need to set all their volume levels to be perceived the same. Alternatively a decibel meter could be used if we want to be more scientific.

We know each pedal is capable of a wide range of sounds. But some pedals have a "sweet spot" or a certain setting they are famous for, or that the manufacturer is proud of. The analogy is that a chef is also capable of a whole range of dishes but is only allowed to showcase one when it comes to competitive judging. He brings the best he can muster and what he feels the judges will like.

2. Each Manufacturer/Representative sets up their pedal in the way they feel it is best placed. Basically: does it need or want a buffered pedal in front of or behind it? Is it expecting a high/low impedance input? Let it be entirely up to them. If they feel the need for a buffer anywhere, let them place a standardized buffer where they choose.

This is to eliminate the "oh, my pedal sounds best 2nd in a chain" or "my pedal is expecting an active guitar" theories and such. I would say using an identical Boss pedal for every buffer would be a fairly accurate method, Boss being one of the most commonly used effects.

3. Some switching method will have to be created where Guitar -> common cable -> manufacturer's chain -> common cable -> amp.

Likely this will involve a long true bypass switcher plus equal lengths of cables to each manufacturer's chain.

4. Because the switching method will likely involve a true bypass switcher, we can now do a double-blind test, where even the person doing the switching will not know which pedal is being switched in. The person setting up the chains obviously should not be involved in the execution, nor interact with anyone else in the experiment for the duration of the experiment.
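
A rough sketch of how the double-blind part could be run (the chain names are placeholders): the person who wires the chains into the switcher seals the mapping and leaves, so the operator and the audience only ever see loop numbers.

    import random

    def seal_loop_assignment(pedal_chains):
        # The person wiring the switcher shuffles which chain goes into
        # which loop, writes the mapping down, seals it, and leaves.
        loops = list(range(1, len(pedal_chains) + 1))
        random.shuffle(loops)
        return dict(zip(loops, pedal_chains))  # sealed key: loop -> chain

    chains = ["chain A", "chain B", "chain C", "chain D"]  # placeholder names
    sealed_key = seal_loop_assignment(chains)
    # The sealed key is opened only after all the votes have been tallied.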


That's for the general idea. The details I'm sure can be carefully thought about and learnt from the experience of the VS shootout, which I enjoyed greatly  :icon_biggrin:

FlyingZ

If they really wanted a true test they would put the guts in generic sealed enclosures with plain knobs and let the test group have them for a week.

I'm surprised anyone took that comparison seriously  :icon_confused: