With all due respect to reverbie:
Hmmmm... besides being skeptical of these types of commercial ads, I also come from the same school of thought as analogmike... After watching the YouTube clips, it did appear that the Visual pedal was second in every chain of pedals, counting from the guitar. I could be wrong. Placement surely would have an impact on tone, especially since I saw it running directly after a buffered Boss pedal, etc.
Skepticism is good. It is in fact skepticism that led to us doing this. But you are correct - we did have dogs in the hunt, we admit that up front, and we did a lot of backstage work to try to eliminate that as an issue.
You bring up an issue that may need to be tested, or at least eliminated by the design of a future experiment. By the way, as Mark can tell you, the design of experiments is a technical specialty all on its own. I have some familiarity with it, but am by no means a specialist in that field. Does placement in the chain of similar devices, on its own, change the sound enough to change the results of such an experiment?
Clearly, I don't know, because (a) I didn't think of that ahead of time and (b) I don't know the order of the effects in the test well enough to say. I think this issue is worthy of a properly designed test all its own. Maybe another way to look at it: I can assure you that there was no attempt to nobble the test results by careful, crafty placement of Visual Sound pedals second in line every time (second in line being the magic, perfect place to be?!?).
As an interesting sidelight, the first or second position in the test is the toughest position to be in, because the winner of that matchup must be individually compared against every other pedal in the lineup to be selected overall. Succeeding positions are in fewer matchups. So being in the first pair means being tested directly against every other pedal to be overall winner. First and second pedals have a rougher test - no drawing a bye into the semi-finals as in sports competitions.
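The matchup arithmetic above can be sketched in a few lines. This is a hypothetical illustration (the eight-pedal field and the function name are my own invention, not the actual test roster):

```python
def comparisons_to_win(position, n_pedals):
    """Number of head-to-head votes a pedal entering at the given
    1-indexed position must win to take the whole test, in a
    winner-stays-on format where the loser of each vote is eliminated."""
    # Positions 1 and 2 meet in round 1; position k >= 3 enters in round k - 1.
    first_round = 1 if position <= 2 else position - 1
    total_rounds = n_pedals - 1
    return total_rounds - first_round + 1

# With, say, 8 pedals: the first two must win all 7 rounds to take it,
# while the last entrant needs to win only 1.
print([comparisons_to_win(p, 8) for p in range(1, 9)])  # [7, 7, 6, 5, 4, 3, 2, 1]
```

So the first pair really does draw the roughest road: no byes, every later pedal must be beaten directly.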
I know your boss knows this and remember some of your amazing insights into this, so that doesn't seem like something that would be overlooked.
I've never heard him say anything on the issue of second-in-the-chain being better, either, and never heard it mentioned by others. It is possible that he knew it and never said anything over the ten-plus years I've known him. But I'm clueless about my own amazing insights into this. Can you point me to some of that? I apologize if I'm seeming dense or deliberately coy - I'm not. Maybe I don't understand what you mean by "knows this" (i.e. what "this" is, other than being second in line).
Randomization of that experimental parameter would help the results, as would not letting the riffmaster see which pedal he was using - they were well within his peripheral sight... that is clear from watching the video.
OK, I'll go for that, as I've already said. All I can say is that he was instructed not to shade the results and that I was watching for that. This is one of those things that was not made impossible by the design of the experiment, other than trying to play fair, which was happening; of course, someone (this is not accusative, I do not mean this as an ad hominem attack) who did not like the results will not be able to accept the "playing fair" thing. It wasn't perfect, as I've said before - just all we could do within the time and money available.
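For a future run, the randomization suggested above could be as simple as shuffling the test order fresh for each session, so no brand systematically lands in the same chain position. A minimal sketch (the pedal labels are anonymized placeholders, not the real lineup):

```python
import random

def randomized_order(pedals, seed=None):
    """Return a fresh random test order for the pedals.
    Passing a seed makes a given session's order reproducible,
    which helps when documenting the experiment afterward."""
    rng = random.Random(seed)
    order = list(pedals)  # copy so the master list is untouched
    rng.shuffle(order)
    return order

lineup = ["A", "B", "C", "D", "E"]
print(randomized_order(lineup, seed=42))
```

The seed is optional; recording it per session lets anyone reconstruct exactly which pedal sat where.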
Also, a show of hands is not very scientific either... you see guys looking around for affirmation before they vote... how about filling out surveys and then tallying at the end? And how can you extrapolate winning results if you are going head to head one at a time?
Agreed. Audience-to-audience interaction is an issue. A perfect test would have done the entire mess one person in the studio at a time, as I've already said. Beyond that, sixty people is not a very large sample size, Mark's notes about the advisor, student, and girlfriend aside. That's not perfect either.
This can only be taken as an indicator. However, the head-to-head looking that resulted was at least applied fairly, because the audience could look from one to the other equally for all pedals while voting. It could have worked against any pedal as well as for it. It's a confounding issue (as they say in the design-of-experiments biz) rather than evidence of favoritism.
Did he just do Visual vs. A, Visual vs. B, etc., and declare Visual the winner if it beat the majority of them head to head, or did it only have to beat one pedal head to head?
It was single elimination. Two pedals of unknown brand are compared, a vote is taken. The loser is eliminated. Next pedal is compared to the winner of the first comparison. The nominal "best" pedal is compared to all of the others with at most one level of indirection. So if the first or second pedal tried was eventually the best, it was compared against every single pedal. If the third pedal was eventually best, it was compared to all the other pedals and only indirectly to the loser of the first pairing. Basically, a pedal only gets judged second best once. This is a valid way of picking "best" (presuming the method of judging one versus the next is accepted as valid), but it does not produce a one to N ranking, and does not produce an N by N matrix of every-to-every comparisons. In particular, no, it was not Visual vs A, Visual vs B, etc. The Visual Sound pedal also would be eliminated on any single loss.
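The winner-stays-on procedure described above can be sketched in a few lines. The scoring dictionary and the `vote` stand-in below are invented for illustration - in the real test the "vote" was an audience show of hands, not a score lookup:

```python
def run_winner_stays(pedals, vote):
    """Single elimination, winner stays on: compare the current champion
    to the next pedal in line; the loser of each vote is out for good.
    `vote(a, b)` returns the winner of one head-to-head comparison."""
    champion = pedals[0]
    for challenger in pedals[1:]:
        champion = vote(champion, challenger)
    return champion

# Toy stand-in for the audience vote: the higher "score" wins every time.
scores = {"A": 3, "B": 5, "C": 4, "D": 2}
winner = run_winner_stays(list(scores), lambda a, b: a if scores[a] >= scores[b] else b)
print(winner)  # "B" under this toy scoring
```

Note what this format does and does not give you: a single overall "best" (given that each individual vote is accepted as valid), but no 1-to-N ranking and no all-pairs comparison matrix.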
He even double-checks one of the results when it appears the Visual pedal might have barely lost... man, this is Psychology 101.
Can you point out which section of video that is? I'd like to review it.
Several votes were close - which indicates only that the performance of the pedals was close, equal numbers of people liking one versus the other better, and that abstention was allowed. I'd like to see whether Bob's talk to the audience about ties was on the video there. Bob did explain just this issue, that in fact there may be ties, but that we did, for purposes of eventually selecting one as best, need to pick one, and that where that happens, he'd pester for abstainers to weigh in.
There's one other issue of preconceived results going on here. You say "He even double checks one of the results when it appears the Visual pedal might have barely lost". I'd like to see (a) whether it was obvious to the audience that a Visual Sound pedal was in contention when the results were double-checked, (b) whether the Visual Sound pedal was in fact the purported loser before the recheck, and (c) whether this was the only instance of a recheck. Your implicit assumption in saying this is that Bob was manipulating the results by recalling votes until the audience got it the way he wanted. As you say, Psychology 101. Assumptions like that need testing.
You don't know Bob, so I'm sure your skepticism is based on other owners of effects companies. And that's a reasonable basis for this. However, if you did know Bob, and had participated in the run up to this, you'd know that Bob would rather just write off the money to do the test than manipulate the results. But you have only my word for that, and that's suspect too since I work for Visual Sound. So once again, I'm down to - I was there, it was done as fairly as we could, within the constraints.
You don't mention the most straightforward version of faking the results - running the tests and simply declaring the winner to be who you want. A truly unscrupulous person would just do that. Why fool with letting the guitarist favor one pedal by playing better for some pedals versus others, place all your pedals second in line, or re-call votes until you get what you like? Faking is easy. Doing it honestly is hard.
Lastly, is the one that beat Visual in any given test the overall winner for that category?
No, see the discussion of the testing method. The Visual Sound pedal could have been eliminated at any test. There was no attempt to simply compare Visual Sound to each other pedal, then declare it the winner. The Visual Sound pedal, if it was really second in line every time (which I don't know, see above) would have to be compared to every other pedal directly to be the overall winner. A Visual Sound pedal would be eliminated the first time it lost a vote, just like any other pedal in the contest.
And the thing about 10,000,000 hits on the footswitch... come on, people, this is a rating from testing with a machine that taps the switch repeatedly in succession in a completely "unhuman" way... those crappy blue toggle switches are rated for like a million latches, and I can guarantee you they would never last that long under actual playing conditions. I would focus more on the ease of replacement and fewer mechanical parts, which are both positives.
You're correct about the testing method. A machine did the test, I'd guess. As I said, I didn't do the testing of the switch. ALPS specifies that in their product literature. Do I expect 10M operations? Of course not. Do I expect it would last ONE million from a switch rated for ten million, if anyone ever used a pedal that much? Yes. I think that's a valid engineering and human factors thing to do. Do you agree?
One million operations is one operation per second for 11.6 days, working 24 hours a day. Or look at it per gig: a four-hour gig of three-minute songs, pressing the switch twice per song, is 160 presses per gig, so one million presses is 6250 gigs; at five gigs a week (you're on tour) for 50 weeks a year, that's 25 years. That is, far more use than anyone, including a professional touring musician, will ever give the pedal.
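The arithmetic above is easy to double-check:

```python
# One million footswitch operations at one press per second:
seconds = 1_000_000
days = seconds / (24 * 60 * 60)
print(round(days, 1))  # 11.6 days of continuous, around-the-clock pressing

# Gigging wear: a 4-hour gig of 3-minute songs, 2 presses per song.
songs_per_gig = (4 * 60) // 3        # 80 songs
presses_per_gig = 2 * songs_per_gig  # 160 presses per gig
gigs = 1_000_000 // presses_per_gig  # 6250 gigs on one million presses
years = gigs / (5 * 50)              # 5 gigs a week, 50 weeks a year
print(gigs, years)                   # 6250 gigs, 25.0 years
```

And that is the budget for one million operations on a switch rated for ten million.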
On the toggle switches: those are indeed rated for far fewer operations, largely because toggle switches rated for 10M operations are simply not available in sizes that fit. However, I suspect (but cannot prove) that no player ever flips the toggle switches twice per song for every song in a gig. Of course, we don't claim the toggles are rated for ten million operations, either.
We did try to make the mechanical plunger simple and easy to replace, as well as non-critical. The actual electrical switch, if it ever needs replacement, is currently $1.13 in ones from Mouser (and is a stocked Mouser part) and can be replaced by just soldering a new one into the small PCB which holds the tactile switch.
Thanks - making that cheap and easy to replace was deliberate.
I am very critical when it comes to these types of things (a la my degree).
As you should be - see my comments on skepticism above.
Why not do it right the first time?
As I addressed - time and money. Would you like to participate in funding the perfect test?
This wouldn't even pass the most basic criteria for a valid scientific experiment with statistically significant results...
No such claim was made, was it? In point of fact, I believe I've done a lot of self evaluation already in explaining that there were several areas where it could have been done better in the sense of a perfectly designed experiment.
Statistical validity is yet another criterion. That's an easy thing to toss in, and I warned Bob ahead of time that we'd hear about sixty people watching and voting not being "statistically valid". People hear "statistically valid" and think "yeah, yeah, statistically valid", but they have no particular concept of what goes into making a statistically valid test. I'll leave it at this: "statistically valid" means a much, much bigger test than was done. We decided that some information was better than no information, which was the other alternative.
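For anyone curious what goes into "statistically valid" here, a minimal sketch of one piece of it: an exact two-sided binomial test on a single head-to-head vote, under the simplifying assumption that each of sixty voters independently picks either pedal with probability one half when the pedals are indistinguishable. (This function and the vote splits are illustrative, not data from the actual test.)

```python
from math import comb

def two_sided_binomial_p(k, n):
    """Exact two-sided p-value for k 'wins' out of n votes under the
    null hypothesis that each voter picks either pedal at random (p = 0.5).
    Sums the probabilities of all outcomes at least as extreme as k."""
    probs = [comb(n, i) * 0.5 ** n for i in range(n + 1)]
    observed = probs[k]
    return min(1.0, sum(p for p in probs if p <= observed + 1e-12))

# A 35-25 split among 60 voters could easily be chance:
print(two_sided_binomial_p(35, 60) > 0.05)  # True
```

So even a fairly lopsided-looking show of hands from sixty people is weak evidence on its own, which is exactly why a "statistically valid" version is a much bigger undertaking.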
i hate to be the total %^&*ah here but let's call a spade a spade.
As a personal observation, I find that the words "i hate to be..." can often be interpreted as "it gives me no displeasure to announce that ..."
This is an infomercial for Visual, with all due respect to RG. We have all seen this type of "blind" experiment with soda, food dehydrators, magic cleaners, leg-hair-removing wax, you name it. It's a ploy to sell pedals while simultaneously giving a "shout out" to the very pedals that were the inspiration for the Visual pedals in the first place. Obviously the CEO is a decent man, but it's equally obvious he's a businessman too.
And this gets right down to the issue of belief. If you believe that it's all smoke and mirrors, you're going to disbelieve the results, no matter what the test, test methodology, etc. And that's OK. Believe what you want. Of course we're happy to see that we did well. And (as I've already stated here) would we have made this public if we came out on the bottom of it? Duhh...
On the other hand, did we fake the results, either directly (by simply announcing winners irrespective of the voting) or by subtly and cleverly manipulating the pedals, the audience, the guitarist, or the playing; by maladjusting the competitors ahead of time (hmmm, one you missed!); by modifying the competitors ahead of time (there's another); by having a "sucks" button that Bob could press with his toe on the floor (ayie! another); by rigging half the chairs in the audience to vibrate when the "right" pedal was in play; or by any of the several million other ways it could have been fudged? No. That can be believed or not.
I would not submit this to a peer-reviewed scientific journal (hmmm... the Journal of Scientific Sound Evaluation?) as a perfectly designed experiment. Is it pure and utter balderdash and self-serving la-la like some audio and guitar advertising? It is not. But that's an item of belief, or not. I have to tell you that some research reported in peer-reviewed scholarly journals as absolute, well-designed-experiment, statistically-significant results has turned out after a short while to have simply been made up. Whatcha gonna do when the guys in the lab coats fake their data?
But it definitely did teach me how similar some of these pedals sound, which was very cool. For that alone it was valuable - especially the overdrives. Also, I have played Visual pedals and am impressed with the way they sound, which is no surprise. Thanks for the link. Cool post nonetheless.
We expected most of this kind of evaluation of the effort, and some that you didn't mention as well.
In all seriousness, I would welcome your help in designing an unassailable test methodology for pedal sounds.