Death To All wrote: Ok,
There is no memory with the dice. Your arguments are flawed. Why? Let me explain.
The theory that the list gets stale, that the probability changes when a large percentage of the list is used up, would be valid if this were a deck of cards. If you have only 2 cards left in the deck and have never seen the ace of spades or the two of hearts, each of those two cards has a 50% probability of being next.
The list CC uses is composed of numbers taken out of thin air (literally). Let's say 499,998 dice rolls have been used and only 2 are left. Let's say that you witnessed all previous 499,998 rolls and wrote them down. Can you predict the probability of the next two rolls? Will it be different from 1/6? No, it will not. Each roll is not dependent on the previous ones. The last 2 rolls will be just as random as the first 499,998.
In essence, there's no difference between getting a 'live' random number out of thin air and taking it from a list of previously 'live' random numbers pulled out of the air. They are just as random.
I will add the caveat that if the list of 500,000 numbers is skewed, then the probability of each face will be skewed as well. I would be shocked if it were statistically significantly skewed, though, as 500,000 is more than an ample sample size.
This actually isn't quite right, because of the re-use of the file. So the file in effect becomes like a deck of cards. Obviously, if the list were just used once, the 499,999th and 500,000th lines would be random.
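To make the deck analogy concrete, here is a minimal Python sketch. It is purely illustrative (not CC's actual code), and it assumes that "re-use" means the same fixed pool of pre-generated rolls gets reshuffled and dealt out again each pass:

import random

FILE_SIZE = 500_000  # assumed size of the pre-generated roll file

def live_roll():
    # A fresh, independent roll every time: no memory is possible.
    return random.randint(1, 6)

# The file is generated once; its composition is then fixed for good.
roll_file = [random.randint(1, 6) for _ in range(FILE_SIZE)]

def make_file_roller(pool):
    # Deal rolls from the fixed pool, reshuffling the same pool when it runs out.
    # Within one pass this is sampling WITHOUT replacement: every roll dealt
    # changes what is left, which is exactly the deck-of-cards memory.
    deck = []
    def next_roll():
        nonlocal deck
        if not deck:
            deck = pool[:]        # same multiset of values every pass
            random.shuffle(deck)
        return deck.pop()
    return next_roll

file_roll = make_file_roller(roll_file)

The key difference is that live_roll() can produce any of the 6^500,000 possible sequences over 500,000 rolls, while file_roll() can only produce rearrangements of whatever happens to be in roll_file.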
There is also a different argument supporting your case, which goes like this: assume that instead of 2,500,000 dice we had 500,000 coin flips, and that the coin file happens to contain exactly 250,000 heads and 250,000 tails. Reuse of the coin file means the players can only ever see orderings of that fixed set of coins, so the total number of possible sequences of the 500,000 coins is C(500,000, 250,000), which is far fewer than the 2^500,000 sequences possible with truly independent flips. However, assuming their randomization is otherwise good, none of those C(500,000, 250,000) sequences is any more likely than any other. Even if, on the last 2 coin tosses, all of the heads have been used up and the last 2 must be tails, that is still sort of random *considering the whole sequence*, because the entire sequence of 500,000 flips was randomly generated. It can be shown that the chance of the last 2 being tails *considering the whole sequence* is actually not 1/4 but (250,000/500,000) x (249,999/499,999) = 249,999/999,998, just a shade under 1/4.
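For anyone who wants to check that number, here is a short sketch (again just an illustration, nothing to do with CC's real implementation) that computes the exact probability that the last two entries of a randomly ordered 250,000-heads / 250,000-tails file are both tails, and then sanity-checks it with a Monte Carlo run on a scaled-down file:

import random
from fractions import Fraction

N, T = 500_000, 250_000   # file size and number of tails in it

# Exact: with a uniformly random ordering of a fixed deck, the last two
# entries behave like two draws without replacement.
exact = Fraction(T, N) * Fraction(T - 1, N - 1)
print(exact)              # 249999/999998, just under 1/4

# Monte Carlo check on a scaled-down file (the full size would be slow).
n, t, trials, hits = 1_000, 500, 50_000, 0
deck = [1] * t + [0] * (n - t)     # 1 = tails, 0 = heads
for _ in range(trials):
    random.shuffle(deck)
    if deck[-1] == 1 and deck[-2] == 1:
        hits += 1
print(hits / trials)      # should land near (500/1000)*(499/999) = 0.2497...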
Bottom line, assuming mechanical perfection and no programming bugs:
1. From the second use of the dice file on, the dice do have memory
2. The rolls depart from randomness
3. The size of the file helps mitigate the departure from randomness.
4. Tentative hypothesis is that the rolls should be *less* streaky at a macro level (I know this is different from what I said before, but I had a chance to work out some of the math yesterday); the sketch below illustrates the effect with coins.
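On point 4, a rough way to see it (once more just an illustrative sketch with coins and a scaled-down file, not anything CC actually does): when rolls are dealt from a fixed, balanced pool, the draws are negatively correlated, so the count of heads over a long stretch of play has a smaller variance than truly independent flips would give. Smaller variance over long stretches is exactly what "less streaky at a macro level" means.

import random, statistics

N, T = 10_000, 5_000   # scaled-down file: 10,000 coin flips, exactly half heads
K = 1_000              # length of the stretch of play we look at
TRIALS = 10_000

pool = [1] * T + [0] * (N - T)   # the fixed file, 1 = heads

def stretch_from_file():
    # K flips dealt from the fixed file = sampling without replacement
    return sum(random.sample(pool, K))

def stretch_independent():
    # K genuinely independent fair flips
    return sum(random.getrandbits(1) for _ in range(K))

for label, fn in [("independent flips", stretch_independent),
                  ("dealt from file  ", stretch_from_file)]:
    counts = [fn() for _ in range(TRIALS)]
    print(label, "variance of heads per stretch:",
          round(statistics.variance(counts), 1))

# Expected: independent flips give roughly K/4 = 250, while dealing from the
# file gives roughly (K/4) * (N - K)/(N - 1), about 225. The bigger the file
# is relative to the stretch, the smaller this effect gets, which is point 3.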