Win 1000 gem codes in the “Is the Shuffler broken?!?” raffle!

variancekills June 12, 2021 7 min

As we head further into the offseason, there really isn’t anything interesting to do on MTGA as far as competitive play is concerned. The next qualifier weekend is in August, for which I have already qualified, so I have no need to rank up in June or July.

Thus, I thought this would be a good time to relax, unwind, and unload some 1k gem codes through an elaborate raffle contest! For those who follow my content, you know that one of the things I am interested in is examining all the “interesting” conspiracy theories that some people keep coming up with regarding MTGA. For example, I’ve run a number of events called “Matchmaker Roulettes,” where I test live on stream whether theories about the matchmaker hold up.

However, of all these conspiracy theories, perhaps none are as ubiquitous as those about the shuffler. Not a day goes by that I do not come across a “shuffler is broken” post somewhere. As such, I have chosen it as the subject of this event. Together we will answer the question: “Is there reason to believe that the shuffler is broken?”

Theoretical Background

Definitions are necessary. What do we mean by broken? Well, with a fair shuffle, every ordering of your deck is equally likely. Obviously, there is no way for us to check this directly. As such, we will limit ourselves to the number of lands appearing in one’s opening hand prior to any mulligans. With a fair shuffler, the distribution of this random variable should be hypergeometric, parameterized by the size of the deck, the number of lands in the deck, and the number of cards in the opening hand. As such, the probability of each possible number of lands in the opening hand for, say, a 60-card deck with 24 lands, is exactly computed in the table below. Let’s call this table “Table Truth 1.”

Table Truth 1

 Number of lands in the opening hand Probability
0 0.021614527
1 0.121041353
2 0.269414624
3 0.308704256
4 0.196448163
5 0.069334646
6 0.012546269
7 0.000896162
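Under a fair shuffle, each probability in Table Truth 1 is just a ratio of binomial coefficients. As a rough sketch (plain Python; the function name is mine), the whole table can be generated like so:

```python
from math import comb

def opening_hand_land_probs(deck=60, lands=24, hand=7):
    """Hypergeometric probabilities for the number of lands
    in the opening hand from a fairly shuffled deck."""
    return [comb(lands, k) * comb(deck - lands, hand - k) / comb(deck, hand)
            for k in range(hand + 1)]

probs = opening_hand_land_probs()
for k, p in enumerate(probs):
    print(k, round(p, 9))
```

Each entry is P(k lands) = C(24, k) × C(36, 7 − k) / C(60, 7); the single most likely opening hand has 3 lands, at about 30.9%.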

This means that if I use a deck with those specifications, say, 100 times, then repeat this process over and over again (getting results from 100 trials each time) and take the average across repetitions, the average number of times each outcome occurs out of 100 trials would be as follows. Let’s call this table “Table Truth 2.”

Table Truth 2

 Number of lands in the opening hand Average number of times the outcome occurs out of 100 trials
0 2.161452726
1 12.10413527
2 26.94146236
3 30.87042563
4 19.64481631
5 6.933464579
6 1.254626924
7 0.089616209

These values are, of course, simply each probability multiplied by 100 trials.

Of course, if one actually does this experiment a single time, the result will not match the table above exactly. For example, one result might be as shown in the following table. Let’s call this table “Table Good.”

Table Good

 Number of lands in the opening hand Actual number of times the outcome occurs out of 100 trials
0 3
1 11
2 27
3 28
4 20
5 8
6 2
7 1

As you can see, the frequency values in Table Good are very close to those in Table Truth 2. In fact, we can use a test called a Chi-Square Goodness-of-Fit test to check whether there is sufficient statistical evidence that the data we gathered is being generated by a shuffler that does NOT follow at least one of the probabilities specified in Table Truth 1. Using this test on Table Good, we get a p-value of 0.159. I will skip the technicalities of what this value means (anyone who is curious can consult any standard text on elementary statistics). For our purposes, if this p-value is less than 0.05, then we can claim that there is sufficient evidence that the true distribution from which our data comes is not Table Truth 1. However, as you can see, the p-value is greater than 0.05, which means we do not have enough reason to believe that the shuffler is unfair with respect to lands in the opening hand.
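For the curious, here is a minimal sketch of that computation in plain Python (no statistics library; the helper names are my own). It builds the expected counts from the hypergeometric probabilities, forms the chi-square statistic for Table Good, and gets the p-value from the upper incomplete gamma recurrence for 7 degrees of freedom:

```python
from math import comb, erfc, exp, gamma, sqrt

def fair_probs(deck=60, lands=24, hand=7):
    """Hypergeometric probabilities from Table Truth 1."""
    return [comb(lands, k) * comb(deck - lands, hand - k) / comb(deck, hand)
            for k in range(hand + 1)]

def chi2_sf(x, df):
    """P(chi-square with odd df exceeds x), via the recurrence
    Q(a + 1, y) = Q(a, y) + y**a * exp(-y) / gamma(a + 1)."""
    y, a = x / 2, 0.5
    q = erfc(sqrt(y))                       # Q(1/2, y)
    while a < df / 2:
        q += y ** a * exp(-y) / gamma(a + 1)
        a += 1
    return q

observed = [3, 11, 27, 28, 20, 8, 2, 1]     # Table Good
expected = [100 * p for p in fair_probs()]  # Table Truth 2
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2_sf(stat, df=len(observed) - 1)
print(round(p_value, 3))                    # ≈ 0.159
```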

Now, consider the following table, which we call "Table Bad."

Table Bad

 Number of lands in the opening hand Actual number of times the outcome occurs out of 100 trials
0 5
1 15
2 29
3 26
4 16
5 6
6 2
7 1

In Table Bad, I increased the number of times that 0- to 2-land hands appeared relative to the rest across the 100 repetitions. The difference may not be perceptibly large just from eyeballing the data, but if we run the test again here, the p-value is 0.027, which is less than our 0.05 threshold. If I inflate the number of times that 6 and 7 lands appear as well (the common claim is that the shuffler gives either too many lands or too few), the p-value becomes even smaller.
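An equivalent way to read the two tables is to compare each chi-square statistic to the 5% critical value for 7 degrees of freedom, which is about 14.07. A sketch in plain Python (the function name is mine):

```python
from math import comb

def chi2_stat(observed, deck=60, lands=24, hand=7, trials=100):
    """Chi-square goodness-of-fit statistic against the fair-shuffle
    expectation (expected count = trials * hypergeometric probability)."""
    expected = [trials * comb(lands, k) * comb(deck - lands, hand - k)
                / comb(deck, hand) for k in range(hand + 1)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

good = [3, 11, 27, 28, 20, 8, 2, 1]  # Table Good
bad = [5, 15, 29, 26, 16, 6, 2, 1]   # Table Bad
# The 5% critical value for 7 degrees of freedom is about 14.07;
# Table Good stays below it, Table Bad crosses it.
print(round(chi2_stat(good), 2), round(chi2_stat(bad), 2))  # 10.55 15.84
```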

It is in this context that we will hold our little raffle contest. Here is how it works:

What will I do?

Every day starting from Sunday morning, June 13, at about 7am EST, I will stream games on the best-of-three (Bo3) ranked ladder. I will always use a deck with 60 cards and 24 lands, and no double-faced cards with a nonland face. I won’t really be interested in winning any games, although I might try if it suits my mood. What I will be interested in is recording the number of lands in the opening hand before mulligans. There will be between one and three of these hands per match since it is Bo3. I will do this, hopefully daily, until we get to 100 repetitions. Everything will be streamed live on Twitch and then recorded on YouTube.

After I reach 100 repetitions, I will tally up the frequencies and do the chi-square test. If the p-value is less than 0.05, then YES, there is a problem with the shuffler. If the p-value is greater than 0.05, then NO, we do not have enough evidence to claim that there is something wrong with the shuffler as far as the context of our experiment is concerned.

What will you do?

What you can do to qualify for the raffle is simple. You will vote on whether you think that YES, there is something wrong with the shuffler, or NO, there is nothing wrong with the shuffler as far as the context of this experiment is concerned. That is, you are guessing whether the p-value will be less than 0.05 (YES) or not (NO). Each of the following actions counts as one entry in the raffle.

1.)    Follow my Aetherhub profile (if you haven’t yet) and cast your vote in the comments below

2.)    Follow me on Twitch, watch at least one of my streams for this event, and cast your vote in the Twitch chat

Thus, each person can have up to two entries for this event.

How can you win?

After I finish the data gathering and conduct the test, I will randomly select the winners of the raffle from the pool that voted in line with the test result. That is, if we indeed get a p-value less than 0.05, I will choose the winners from those who voted YES, the shuffler is broken; if the p-value is not less than 0.05, I will choose from among those who voted NO, the shuffler is not broken.

What are the prizes?

The prizes will be 1k gem codes. The number of codes given away will depend on the total number of entries from Twitch and Aetherhub. I will give away one 1k gem code for every 10 entries, up to ten codes if we reach 100 entries. Furthermore, if we pass 100 entries by any number, I will give away 15 codes instead of 10. Note that there is a limit of ten 1k gem codes redeemable per MTGA account. If you exceed this limit, the code will still be yours to give away or use on another account.

Disclaimer

We will be conducting a statistical test, and it should be noted that no test is perfect. There is always the possibility that a test will give a false positive or a false negative, and it would take another Stats lecture to explain how the 0.05 threshold we set fits into this situation. However, we will stick to the results of the test when deciding which pool of entries to pick the winners from.

Disclosure

All prizes sponsored by the Wizards Creator Program

Let's GOOOOO!

That’s all there is to it. I look forward to the outcome of this experiment…

and may the shuffler be with us all!

About variancekills:

Hi, I'm Mark. I've won exactly one World Magic Cup Qualifier, one Preliminary Pro Tour Qualifier, one Arena Open ($2k), one CFB Pro Showdown (April 2021), and one Mana Traders Series (Oct 2021), and I am looking to win more. I've played in almost every Mythic Championship Qualifier Weekend. Follow my FB page or my Twitch channel for no-frills, competitive Magic. You won't see my face, but I won't hide my gameplay and deck choice flaws. I play both MTGA and MTGO and stream most of the time when I do. I will lose often, and I will make mistakes, but I try my best to let you know when I do (and I think I will still win a lot more times than I lose).

I'm a dad and husband first, a statistician, teacher, and researcher second (I know those are 3 things but bear with me), a Magic player third, and a content creator only because I am a Magic player. 

So yeah, let's play some Magic and may the shuffler be with us all.

FB page: https://www.facebook.com/deathbyvariance

Comments


31 comments

variancekills
No problem. I should be fair to admit though that without testing, there isn't much to the claim. It's just "a feeling."
AritzNeo
@variancekills I sincerely appreciate the offer, but indeed it would require effort and I'm afraid I do not have the time to perform such an experiment. From my perspective, if winning does have an effect, then it can be either on the winning rate of the account or on each deck separately. However, testing both approaches would definitely make the work even more complex. Let's see if someone accepts the challenge.
variancekills
@AritzNeo
The odds of going 20 times on the draw may be very small, but with millions of games played on MTGA, every odd occurrence is bound to happen at some point. Testing whether one is made to go on the draw more as one wins more is something that can be done fairly simply, but it would require effort, as well as someone who can actually win consistently. If you are willing to undertake such a task, I can spare a 1k gem code for you to do it (live streamed just as I did this experiment).
AritzNeo
@variancekills thank you for the links, it is an amazing work from my perspective.

The problem with on-the-play/draw is that when you look at the odds of going on the draw TWENTY times in a row supposing an unbiased coin toss (answer: 0.0001%), this happens not so rarely, particularly during winning streaks. Maybe winning can be a factor, and definitely not spending money (I agree that the correlation with money is almost nonexistent, as the R values you found are extremely low).
variancekills
@AritzNeo the results have been uploaded; they can be found here:

https://www.youtube.com/watch?v=zpLUML75Kdo

The results show that the outcomes are within the statistical odds expected. There is nothing unusual. WotC has been forthcoming with what shuffling method they use and computer shuffling itself is a practically solved problem in computer science.

As for play/draw stats, I have also done work on that here:

https://www.youtube.com/watch?v=p6jWa85VBu4
AritzNeo
I would not say the shuffler is 'broken'; if it is working against statistical odds, it is doing so deliberately. Probably it is patent-protected, just as EA protects its handicap-and-scripting-related stuff in FIFA Ultimate Team and many more 'surprise mechanics'. I have not found the result of this experiment so I cannot elaborate more on it.
However, an additional experiment that you can perform is to test the on-the-play / on-the-draw odds, because IMO they are clearly handicapped.
variancekills
Thanks again to all who participated. I have presented the results and drawn the winners. You can find a link to the video on Aetherhub. The winners are:
RynzMTG
WhatSmada
Melasos

Kindly provide me with an email or Facebook profile that I can message to send your prize. Congratulations!
Rdesmarais2
What's the odds with 17 lands out of 40 but in draft it hurts... Real bad.
Draft broken.
Constructed fine.
Drakmo
Shuffler is fine :-)
variancekills
Thank you all for joining. We are done with data gathering and entries for the event are closed. We have a total of 24 entries so we will be giving away three 1k gem codes. I will do a video by the end of the week sharing the results of the experiment and drawing the winners.