The goal is to (dis)prove a concept by comparing permutations of decks of cards.
A deck contains some number of blue cards (b) and red cards (r), and can be arranged in any permutation of those b's and r's. For a 3-card deck with two blue cards and one red, the possibilities would be:
bbr, brb, rbb
For phase 1, the fact that "b1" and "b2" could swap positions is irrelevant: those permutations count as the same arrangement.
Therefore,
A 9 card deck with 4 blue cards and 5 red cards would have outcome X.
A 10 card deck with 4 blue cards and 6 red cards would have outcome Y.
If you start with the above 4b/6r deck and randomly remove 1 red card, you will have outcome Z.
The possible permutations of outcomes X and Y are, of course, different. The outcomes of X and Z should be equivalent. (I believe.)
Phase 2 would now consider the probabilities of each permutation. So, for outcomes X and Z, we want to know if the odds of each permutation are the same between X and Z. (I believe they are, again.)
My own bit of searching has led me to believe I want to be using itertools, but I am having trouble putting that into the context I need for this exercise.
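For concreteness, here is roughly what I have in mind (a sketch; the function names are just illustrative):

from itertools import combinations
import random
from collections import Counter

def distinct_deals(blue, red):
    '''Every distinct arrangement of `blue` b-cards and `red` r-cards,
    built by choosing which positions hold the blue cards.'''
    n = blue + red
    deals = []
    for blue_pos in combinations(range(n), blue):
        deals.append(''.join('b' if i in blue_pos else 'r' for i in range(n)))
    return deals

print(distinct_deals(2, 1))        # ['bbr', 'brb', 'rbb']
print(len(distinct_deals(4, 5)))   # outcome X: C(9,4) = 126 arrangements

# Phase 2 sketch for outcome Z: shuffle a 4b/6r deck, remove one randomly
# chosen red card, and tally the resulting 9-card arrangements.
counts = Counter()
for _ in range(100_000):
    deck = list('b' * 4 + 'r' * 6)
    random.shuffle(deck)
    deck.pop(random.choice([i for i, c in enumerate(deck) if c == 'r']))
    counts[''.join(deck)] += 1
print(len(counts))                 # should hit all 126 arrangements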
Related
In a list of 300, I need to position 50 items, each repeated 6 times (300 total), in such a way that each occurrence falls within a certain range and the average position of each item in the list is around the middle (150).
By "within a certain range" I mean 8 smaller subsets of positions, like:
1-36, 37-73, 74-110, 111-148, 149-186, 187-225, 226-262, 263-300. So, for example, item 1 could have positions 1, 38, 158, 198, 238, 271 in the list, with an average position of 150.6.
I'm trying to do this to automate a currently manual and time-consuming process, but I'm having trouble figuring out the algorithm. My current thinking: for each item, randomly position it within each segment, checking that if the minimum position were chosen in every remaining segment, the average could still land near 150 (±2); if it can't, re-randomize the previous position until a number works. But thinking about it more, it seems like it may not work and probably won't be fast. I'd really appreciate any help with this
(coding in Python if it matters)
EDIT:
To clarify, I am trying to position these items randomly, so for example, item 1 would not appear 1st in every subset (I know that wouldn't give an average of 150; that's just for clarification's sake). In the example I supplied, item 1 appears 1st in the first subset, 2nd in the second subset, 10th in the fifth, and so on. This is actually where I am having trouble.
This is straightforward by construction. Let's refer to your 8 slices (subsets) in four pairs. Note that I've corrected the arithmetic on the slice boundaries.
A   1-37,  264-300   37 slots each   first & last
B  38-75,  226-263   38 slots each
C  76-113, 188-225   38 slots each
D 114-150, 151-187   37 slots each   middle pair
More specifically, we will pair these in reverse, mapping locations 1 & 300, 2 & 299, 3 & 298, etc. Each such pair of locations will receive the same value from the list of 50 items.
Now, we need sets of 6 slices in 3 pairs, distributed evenly. Each of these sets will omit one of our pairs above:
A B C items 1-12
A B D items 13-24
A C D items 25-36
B C D items 37-48
Since we allocate these in strict pairs, we will now have a mean of exactly 150.5 for each of the 48 objects, the optimum solution. Were the quantity of items divisible by 4, we could finish the allocation trivially. However ...
We now have items 49 & 50 remaining, 12 placements. Slices A & D each have 1 pair (2 slots) open; B & C each have 2 pairs (4 slots) open. We allocate item 49 to set ABC and item 50 to set BCD, finishing the construction.
Every item is allocated to 6 different slices, and has a mean position of 150.5, the mean of the entire collection of 300.
Response to OP comment
I never said they were to be placed in order of item number. Go ahead and do it that way, but only for the lower half of the slices (1-150).
Now, shuffle each of those partitions. Finally, make the upper half the mirror-image of the lower half. Problem solved -- maybe, depending on your definition of "random". The first half has high entropy, but the second half is entirely deterministic, given the first half.
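A minimal sketch of the whole construction in Python (the OP is coding in Python); the container names are mine, and the slice boundaries and set assignments follow the corrected table above:

import random

low_slices = {'A': range(1, 38),     # mirrors 264-300
              'B': range(38, 76),    # mirrors 226-263
              'C': range(76, 114),   # mirrors 188-225
              'D': range(114, 151)}  # mirrors 151-187

sets = {'ABC': list(range(1, 13)) + [49],   # items 49 & 50 fill the
        'ABD': list(range(13, 25)),         # leftover pairs
        'ACD': list(range(25, 37)),
        'BCD': list(range(37, 49)) + [50]}

# For each slice, gather the items that occupy one mirrored pair in it.
items_per_slice = {s: [] for s in 'ABCD'}
for slice_names, items in sets.items():
    for item in items:
        for s in slice_names:
            items_per_slice[s].append(item)

# Fill the lower half at random, then mirror into the upper half.
positions = [None] * 301                   # 1-based; index 0 unused
for s, items in items_per_slice.items():
    random.shuffle(items)                  # the entropy lives in the lower half
    for slot, item in zip(low_slices[s], items):
        positions[slot] = item
        positions[301 - slot] = item       # pair (i, 301-i) has mean 150.5

result = positions[1:]                     # the finished 300-entry list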
Okay so, I'm onto the next step after dealing the cards to two players.
I need the program to take the cards the player wants to get rid of and exchange them for new random cards. The player will be asked how many cards to exchange, and which ones. For example, if the player inputs '1' for one throwaway card, the player then selects which card to remove; that card is removed from the hand (a list in the code) and replaced with one new card. This only happens once, and then it should print both players' hands.
Everywhere I look, it's done in a more complicated way, and I know it's simple coding, but I really do suck at the simplest things.
What I've got so far:
def poker():
    import random
    raw_input('Welcome to a classic game of Poker! You will receive 5 cards. You will have the option to exchange 1 to 3 cards from your hand for new cards of the same amount you exchanged. IF you have an Ace in your beginning hand, you may exchange that Ace for up to four new cards (three other cards including the ace). ~Press Enter~')
    raw_input('S = Spades , H = Hearts , C = Clubs , D = Diamonds ~Press Enter~')
    deck = ['2S','2H','2C','2D','3S','3H','3C','3D','4S','4H','4C','4D','5S','5H','5C','5D','6S','6H','6C','6D','7S','7H','7C','7D','8S','8H','8C','8D','9S','9H','9C','9D','10S','10H','10C','10D','Jack(S)','Jack(H)','Jack(C)','Jack(D)','Queen(S)','Queen(H)','Queen(C)','Queen(D)','King(S)','King(H)','King(C)','King(D)','Ace(S)','Ace(H)','Ace(C)','Ace(D)']
    new_cards = ''
    player1 = []
    player2 = []
    random.shuffle(deck)
    for i in range(5):
        player1.append(deck.pop(0))   # deal each hand on its own line:
        player2.append(deck.pop(0))   # `a.append(x) and b.append(y)` short-circuits, since append() returns None
    print player1
    n_cards_to_exchange = int(input('How many cards would you like to exchange? 1, 2, 3, or 4 IF you have an Ace.'))
    #ignore this for now
    card_choice = int(input('Which card would you like to exchange? 1, 2, 3, 4, or 5? Note: The first card in your hand (or list in this case) is the number 1 spot. So if you want to exchange the first card, input 1. The same is for the other cards.'))
A card that was exchanged out of the beginning hand also can't come back from the deck list after swapping. So like... ['8D','2S','Queen(H)','8S','Jack(H)']
If I wanted to remove 1 card and I choose to remove '2S', then '2S' will no longer be in my hand and will be swapped out with a different card from the deck. '2S' will also not return to my hand for any reason, because it can't be taken from the list a second time. So the output should be all the same cards EXCEPT '2S' will be missing and a new card will be in its place.
There is the standard exchange of up to 3 cards at once, but you can also exchange up to 4 cards IF you have an Ace in your beginning hand. If you ask for 4 without an Ace, you should be rejected and then asked once more how many cards you want to get rid of.
What could work is the following:
n_cards_to_exchange = int(input('How many cards would you like to exchange? 1, 2, 3, or 4 IF you have an Ace.'))
discards = []   # keep discards out of the deck so an exchanged card can't be drawn again
for i in range(n_cards_to_exchange):
    print(player1)
    card_text = ', '.join([str(j) for j in range(1, 5 - i)]) + f', or {5 - i}?'
    card_id = int(input(f'Which card would you like to exchange? {card_text} Note: The first card in your hand (or list in this case) is the number 1 spot. So if you want to exchange the first card, input 1. The same is for the other cards.')) - 1
    discards.append(player1.pop(card_id))
random.shuffle(deck)
for i in range(n_cards_to_exchange):
    player1.append(deck.pop(0))
The idea is that the player chooses the number of cards to drop, then picks each card to drop one at a time, and finally draws that many replacement cards from the deck. Discards go to a separate pile, so an exchanged card can never come back, as you required. If you need any clarification, feel free to ask.
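The snippet above doesn't enforce the Ace rule you described; here's a sketch of that check (names matching the snippet, and the 'Ace(S)'-style card strings from your deck):

# Re-ask until the requested count is legal for this hand.
has_ace = any(card.startswith('Ace') for card in player1)
while True:
    n_cards_to_exchange = int(input('How many cards would you like to exchange? 1, 2, 3, or 4 IF you have an Ace.'))
    if 1 <= n_cards_to_exchange <= 3 or (n_cards_to_exchange == 4 and has_ace):
        break
    print('You can only exchange 4 cards if you hold an Ace. Try again.')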
BACKGROUND: Today I thought I'd begin a small project building a poker simulator. The first task I set was to deal cards from a shuffled deck, and check the various numerically generated probabilities against accepted values. The first such probability I checked was the single pair probability--that is, generating (numerically) the probability of being dealt a single pair, given as inputs the number of cards dealt and the number of hands dealt, where each hand is dealt from a separate shuffled deck. Cards are dealt from the top of the deck. Below I show the beginning of that program.
I first tested the numerically generated single pair probability for five card hands. The computed value comes to within a tenth of a percent of the accepted single pair probability for five card hands (but always high by about a tenth of a percent): https://en.wikipedia.org/wiki/Poker_probability
However, when I test the numerically generated single pair probability for seven card hands, I find that I am off by 4% to 5% from the accepted value (e.g., typical computed value = 0.47828; accepted value as per above = 0.438). I've run the numerical experiments up to ten million hands dealt. The computed single pair probability for seven card hands is stable, and remains off by 4% to 5% from the accepted value. It's not clear why this is the case.
QUESTION: Why is this the case? I suspect that my code is not taking something into account, but I cannot detect what. Python code follows . . .
NOTE: Issue 31381901 is similar to this one. But in the below code the issue of double counting is accounted for by converting the dealt hand to a set, which will eliminate duplicate values, thus reducing the size of the set (in the case of 7 card hands) from 7 to 6. That reduction indicates a single pair. If three-of-a-kind is present, then the size of the set would be 5, since two of the three cards in the three-of-a-kind would be eliminated by the set conversion.
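For instance, with an illustrative seven-card hand (my own example, using the value-plus-suit card encoding from the code below):

hand = ['AS', 'AH', '7C', '2D', '9S', 'KC', '4H']   # exactly one pair (aces)
values = [card[0] for card in hand]                 # ['A','A','7','2','9','K','4']
print(len(set(values)))                             # 6 == 7 - 1, so the test reports a pair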
from random import shuffle

def make_deck():
    '''Make a 52 card deck of cards. First symbol
    is the value, second symbol is the suit. Concatenate
    both symbols together.
    Input: None
    Output: List
    '''
    value = ['A', '2', '3', '4', '5', '6', '7', '8', '9', 'T', 'J', 'Q', 'K']
    suit = ['C', 'H', 'S', 'D']
    deck = [j + i for j in value for i in suit]
    return deck

def shuffle_deck(deck, times_to_shuffle=7):
    '''Shuffle a deck of cards produced by make_deck().
    Default: 7 times.
    Input: list, int
    Output: None (the deck is shuffled in place)
    '''
    for n in range(times_to_shuffle):
        shuffle(deck)

def test_for_single_pair(hand, cards_per_hand):
    '''Tests for presence of a single pair in
    a dealt hand by converting the hand to a set.
    The set representation of a hand with a single
    pair will have one less member than the original
    hand.
    Input: list, int
    Output: int
    '''
    hand_values_lst = [card[0] for card in hand]
    hand_values_set = set(hand_values_lst)
    set_size = len(hand_values_set)
    if set_size == (cards_per_hand - 1):
        return 1
    else:
        return 0

def deal_series_of_hands(num_hands, cards_per_hand):
    '''Deals a series of hands of cards and tests
    for single pairs in each hand. Creates a deck
    of 52 cards, then begins dealing loop. Shuffles
    deck thoroughly after each hand is dealt.
    Captures a list of the dealt hands that conform
    to the spec (i.e., that contain one pair each),
    for later debugging purposes.
    Input: int, int
    Output: int, int, list
    '''
    deck = make_deck()
    single_pair_count = 0
    hand_capture = []
    for m in range(num_hands):
        shuffle_deck(deck)
        hand = deck[0:cards_per_hand]   # first cards dealt from the deck
        pair_count = test_for_single_pair(hand, cards_per_hand)
        if pair_count == 1:
            single_pair_count += pair_count
            hand_capture.append(hand)
    return (single_pair_count, num_hands, hand_capture)

cards_per_hand = 7   # user input parameter
num_hands = 50000    # user input parameter
single_pair_count, num_hands_dealt, hand_capture = deal_series_of_hands(num_hands, cards_per_hand)
single_pair_probability = single_pair_count / num_hands_dealt
single_pair_str = 'Single pair probability (%d card deal; poker hands): ' % (cards_per_hand)
print(single_pair_str, single_pair_probability)
If the hand contains a single pair but also contains a higher-ranking hand such as a straight or a flush, your code still counts it as a pair, whereas the probability article does not.
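For illustration, one way to tighten the test (the helper names are mine; a set size of 6 already rules out two pair, trips, full houses and quads, so straights and flushes are the only remaining overcounts):

from collections import Counter

def contains_flush(hand):
    '''True if five or more cards share a suit (card[1] is the suit).'''
    suit_counts = Counter(card[1] for card in hand)
    return max(suit_counts.values()) >= 5

def contains_straight(hand):
    '''True if five consecutive values are present.
    The Ace plays both low (1) and high (14).'''
    order = {v: i for i, v in enumerate('A23456789TJQK', start=1)}
    values = {order[card[0]] for card in hand}
    if 1 in values:
        values.add(14)
    run = 0
    prev = None
    for v in sorted(values):
        run = run + 1 if prev is not None and v == prev + 1 else 1
        if run >= 5:
            return True
        prev = v
    return False

def test_for_single_pair_strict(hand, cards_per_hand):
    '''Count a hand only when one pair is its best feature.'''
    if len({card[0] for card in hand}) != cards_per_hand - 1:
        return 0
    if contains_flush(hand) or contains_straight(hand):
        return 0
    return 1

If this diagnosis is right, swapping the stricter test into deal_series_of_hands should pull the computed seven-card value down toward the accepted 0.438.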
I'm curious about how this works out mathematically, but I'm not smart enough to figure it out myself. (I tried)
If you generate a list of 1000 pseudo-random numbers, such as:
import random

random_numbers = []
for i in range(1000):   # range(1, 1000) would give only 999 numbers
    random_numbers.append(random.randrange(1000, 9999))
Then generate another pseudo-random number to use as an index into the list:
final_value = random_numbers[random.randrange(0, 1000)]   # valid indices are 0-999
Intuitively, this seems like it would be more random than simply generating 1 pseudo-random value like this:
number = random.randrange(1000, 9999)
However, I know there are often a lot of gotchas with randomness, so I figured I'd ask you guys.
Funny enough, it's the same!
While it seems intuitive that you'd end up with a more random number because you're "adding more randomness" to your pick by running the random number generator repeatedly, randrange approximates a uniform distribution, so the sample space and the distribution over it end up being identical between the two options. Let's take a look at a simpler example of this:
Say that you've got a standard deck of 52 cards. You pick 10 cards from this deck at random, with replacement (i.e. you could end up picking the same card multiple times). The expected number of times any particular card shows up among your picks is:
10 * 1/52 = 10/52
because each pick lands on that card with probability 1/52, and you repeat the pick 10 times. Now the second step chooses one of the 10 slots uniformly, so each slot is used with probability 1/10.
The probability that a particular card sits in a given slot and that slot is the one chosen is (1/52) * (1/10); summing over the 10 slots gives:
10 * 1/52 * 1/10 = 1/52
which is exactly the same probability as picking any old card uniformly in the first place!
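If you want to convince yourself empirically, here's a quick Monte Carlo sketch (illustrative only; it takes a few seconds to run):

import random

def indirect_pick():
    '''Build a list of 1000 random numbers, then index into it randomly.'''
    pool = [random.randrange(1000, 9999) for _ in range(1000)]
    return pool[random.randrange(1000)]

def direct_pick():
    '''Generate a single random number directly.'''
    return random.randrange(1000, 9999)

# Compare how often each method lands in the bottom half of the range;
# both should hover around 0.5, within sampling noise.
trials = 10_000
for pick in (indirect_pick, direct_pick):
    hits = sum(pick() < 5500 for _ in range(trials))
    print(pick.__name__, hits / trials)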
A while back I wrote a simple python program to brute-force the single solution for the drive ya nuts puzzle.
(puzzle image omitted; source: tabbykat.com)
The puzzle consists of 7 hexagons with the numbers 1-6 on them, and all pieces must be aligned so that each number is adjacent to the same number on the next piece.
The puzzle has ~1.4G non-unique possibilities: you have 7! options for ordering the pieces (for example, center=0, top=1, continuing in clockwise order...). After you've ordered the pieces, you can rotate each piece in 6 ways (each piece is a hexagon), so you get 6**7 possible rotations for a given permutation of the 7 pieces. In total: 7!*(6**7) ≈ 1.4G possibilities. The following Python code generates these candidates:
from itertools import product   # needed by constructs() below

def rotations(p):
    for i in range(len(p)):
        yield p[i:] + p[:i]

def permutations(l):
    if len(l) <= 1:
        yield l
    else:
        for perm in permutations(l[1:]):
            for i in range(len(perm) + 1):
                yield perm[:i] + l[0:1] + perm[i:]

def constructs(l):
    for p in permutations(l):
        for c in product(*(rotations(x) for x in p)):
            yield c
However, note that the puzzle has only ~0.2G unique candidate arrangements, as you must divide the total number of possibilities by 6: each arrangement is equivalent to 5 others, obtained by rotating the entire puzzle by 1/6 of a turn.
Is there a better way to generate only the unique possibilities for this puzzle?
To get only unique valid solutions, you can fix the orientation of the piece in the center. For example, you can assume that the "1" on the piece in the center is always pointing "up".
If you're not already doing so, you can make your program much more efficient by checking for a valid solution after placing each piece. Once you've placed two pieces in an invalid way, you don't need to enumerate all of the other invalid combinations.
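Building on the generators from the question, a minimal sketch of that idea (the function name is mine, and I'm assuming index 0 of each arrangement is the centre piece, as in the ordering the question describes):

from itertools import product

def unique_constructs(l):
    '''Like constructs(), but the centre piece keeps a fixed
    orientation, so each rotation class is generated once
    instead of six times: 7! * 6**6 ~ 0.2G candidates.'''
    for p in permutations(l):
        # rotate only the six outer pieces; p[0] stays as-is
        for outer in product(*(rotations(x) for x in p[1:])):
            yield (p[0],) + outer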
If there were no piece in the centre, this would be easy. Simply consider only the situations where piece 0 is at the top.
But we can extend that idea to the actual situation. You can consider only the situations where piece i is in the centre, and piece (i+1) % 7 is at the top.
I think the search space is quite small, though the programming might be awkward.
We have seven choices for the centre piece. Then we have 6 choices for the piece above that, but its orientation is fixed, as its bottom edge must match the top edge of the centre piece; similarly, whenever we choose a piece to go in a slot, the orientation is fixed.
There are fewer choices for the remaining pieces. Suppose for example we had chosen the centre piece and top piece as in the picture; then the top right piece must have (clockwise) consecutive edges (5,3) to match the pieces in place, and only three of the pieces have such a pair of edges (and in fact we've already chosen one of them as the centre piece).
One could first build a table with a list of pieces for each edge pair, and then for each of the 42 choices of centre and top proceed clockwise, choosing only among the pieces that have the required pair of edges (to match the centre piece and the previously placed piece) and backtracking if there are no such pieces.
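A sketch of that table (the piece encoding is my assumption: each piece as a tuple of its six numbers read clockwise):

from collections import defaultdict

def edge_pair_table(pieces):
    '''Map each clockwise-consecutive pair of edge numbers (a, b)
    to the indices of the pieces that contain it.'''
    table = defaultdict(list)
    for idx, piece in enumerate(pieces):
        for i in range(6):
            pair = (piece[i], piece[(i + 1) % 6])
            table[pair].append(idx)
    return table

The clockwise walk then looks up table[(a, b)] at each slot, where a and b come from the centre piece and the previously placed piece, and backtracks when the candidate list is exhausted.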
I reckon the most common pair of edges is (1,6), which occurs on 4 pieces; two other edge pairs ((6,5) and (5,3)) occur on 3 pieces; 9 edge pairs occur on two pieces, 14 occur on 1 piece, and 4 don't occur at all. So a very pessimistic estimate of the number of choices we must make is 7*6*4*3*3*2, or 3024.