Process image from screenshot - Python

I'm trying to take a screenshot of a game, Bejeweled (an 8x8 board), and extract the board position from the screenshot. I've tried Image/ImageStat, autopy, and grabbing individual pixels from the middle of the slots, but none of these have worked. So I'm thinking that taking the average color value for each square of the 8x8 grid would identify each piece, but I've been unable to do so with Image/ImageStat and autopy.
Anyone know a way to get the pixel or color values for a region of an image? Or a better way to identify segments of an image with a dominant color?

I've found a way to do this with PIL using ImageGrab and ImageStat. Here's how to grab the screen and crop it to the game window:
from PIL import ImageGrab, ImageStat

def getScreen():
    # Grab the screen and crop it to the game window.
    # Find the pixel borders (left, top, right, bottom) manually.
    box = (left, top, right, bottom)
    im = ImageGrab.grab().crop(box)
    #im.save('testcrop.jpg')  # optionally save your crop
    for y in reversed(range(8)):
        for x in reversed(range(8)):
            # sqh, sqw are the height and width of each piece;
            # each pieceim is one of the 64 game-piece squares.
            piecebox = (sqw * x, sqh * y, sqw * (x + 1), sqh * (y + 1))
            pieceim = im.crop(piecebox)
            #pieceim.save('piececrop_xy_' + str(x) + str(y) + '.jpg')
            stats = ImageStat.Stat(pieceim)
            statsmean = stats.mean
            Rows[x][y] = whichpiece(statsmean)
The above crops an image for each of the 64 pieces, identifies its piece type, and stores that in the array of arrays Rows. I then grabbed the average RGB values with stats.mean for each piece type and stored them in a dictionary (rgbdict). Copy all the outputs into Excel and filter by color type to get those averages. Then I used a residual-sum-of-squares (RSS) method with that dictionary to statistically match each square to the known piece types. (RSS ref: http://www.charlesrcook.com/archive/2010/09/05/creating-a-bejeweled-blitz-bot-in-c.aspx)
rgbdict = {
    'blue': [65.48478993, 149.0030965, 179.4636593],  #1
    'red': [105.3613444, 55.95710092, 36.07481793],   #2
    ......
}
import math

def whichpiece(statsmean):
    bestScore = 100
    curScore = 0
    pieceColor = 'empty'
    for key in rgbdict.keys():
        curScore = (math.pow((statsmean[0] / 255) - (rgbdict[key][0] / 255), 2)
                    + math.pow((statsmean[1] / 255) - (rgbdict[key][1] / 255), 2)
                    + math.pow((statsmean[2] / 255) - (rgbdict[key][2] / 255), 2))
        if curScore < bestScore:
            pieceColor = key
            bestScore = curScore
    return pieceColor
With these two functions the screen can be scraped and the board state transferred into an array, upon which moves can be decided. Best of luck if this helps anyone, and let me know if you fine-tune a move picker.
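For completeness, here is a minimal driver sketch; it is not from the original post, and it assumes the manually measured values getScreen() relies on (the crop box left, top, right, bottom and the piece sizes sqw, sqh) are already defined:

Rows = [[None] * 8 for _ in range(8)]  # 8x8 board state, indexed Rows[x][y]
getScreen()                            # fills Rows via whichpiece()
for y in range(8):
    print([Rows[x][y] for x in range(8)])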

Related

Is there any way to join multiple vector images (in JPG format) without any overlap?

I have a puzzle in which the image is divided into 36 tiles. These tiles are shuffled. Now I want to solve this puzzle using Python and OpenCV. The problem is there is no overlap between the tiles.
(Example image of the shuffled tiles omitted.)
The benefit is that the image has uniform pixel colors, since it was converted from a vector image.
What I tried is to find, for each side of a given image, the image that belongs next to it, based on a per-side score.
What is the score?
I take one image and, for one side (say the right), compute the difference between the last column (right side) of that image and the first column (left side) of every other image, sum the absolute differences, and call that sum the score.
import numpy as np

# rightScore[i] will store the index of the image on the right side of the image at index 'i'.
# right[i] holds the pixels of the right edge (last column) of image 'i',
# left[i] the left edge (first column); similarly for the other sides.
rightScore = np.zeros((36,), dtype=int)
topScore = np.zeros((36,), dtype=int)
bottomScore = np.zeros((36,), dtype=int)
leftScore = np.zeros((36,), dtype=int)
for i in range(36):
    score = np.inf
    for j in range(36):
        if i == j:
            continue
        # Cast to int first: if the edges are uint8, the subtraction
        # wraps around instead of going negative.
        temp = np.sum(np.abs(right[i].astype(int) - left[j].astype(int)))
        if score > temp:
            rightScore[i] = j
            score = temp
Now, for a given image, I compute this score against all other images and take the minimum. The image with the minimum score is the one that sits on the right of the given image. I do this for all sides.
This method works for some images but not for all. Can anyone help?
Also, I know the final image will have 12 rows with 3 tiles each (12 * 3).
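For reference, the right/left/top/bottom edge arrays used above could be built along these lines; this is a sketch with hypothetical tile filenames, not code from the question:

import cv2

# Load the 36 tiles (filenames are hypothetical) and collect their edge
# rows/columns, cast to int so edge differences can go negative.
tiles = [cv2.imread('tile_%02d.png' % k).astype(int) for k in range(36)]
right = [t[:, -1] for t in tiles]   # last column of each tile
left = [t[:, 0] for t in tiles]     # first column
top = [t[0, :] for t in tiles]      # first row
bottom = [t[-1, :] for t in tiles]  # last row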

How can I get the dimensions of a picture placeholder to re-size an image when creating a presentation and inserting a picture using python-pptx?

I'm trying to insert a picture that is re-sized to fit the dimensions of a picture placeholder from a template, using python-pptx. From what I can find in the docs, I don't believe the API has direct access to this. Is there any suggestion of how I might be able to do this, using the library or otherwise?
I have running code that inserts a series of images into a set of template slides to automatically create a report in PowerPoint.
Here is the function doing the majority of the relevant work. Other parts of the app create the Presentation, insert slides, etc.
import os
from pptx.util import Pt
from pptx.dml.color import RGBColor

def insert_images(slide, slide_num, images_path, image_df):
    """
    Insert images into a slide.

    :param slide: slide object from the Presentation class
    :param slide_num: the template slide number, used for formatting
    :param images_path: the directory containing all the images
    :param image_df: pandas DataFrame describing each image in images_path
    :return: None
    """
    placeholders = get_image_placeholders(slide)
    #print(placeholders)
    image_pool = image_df[image_df['slide_num'] == slide_num]
    try:
        assert len(placeholders) == len(image_pool.index)
    except AssertionError:
        print('Length of placeholders in slide does not match image naming.')
    i = 0
    for idx, image in image_pool.iterrows():
        #print(image)
        image_path = os.path.join(images_path, image.path)
        pic = slide.placeholders[placeholders[i]].insert_picture(image_path)
        #print(image.path)
        # TODO: Add resize - get dimensions of pic placeholder
        line = pic.line
        print(image['view'])
        if image['view'] == 'red':
            line.color.rgb = RGBColor(255, 0, 0)
        elif image['view'] == 'green':
            line.color.rgb = RGBColor(0, 255, 0)
        elif image['view'] == 'blue':
            line.color.rgb = RGBColor(0, 0, 255)
        else:
            line.color.rgb = RGBColor(0, 0, 0)
        line.width = Pt(2.25)
        i += 1
The issue is that when I insert a picture into the picture placeholder, the image is cropped, not re-sized. I don't want the user to have to know the image dimensions and hard-code them into my script. If the image is relatively large, a very large portion can be cropped away and the result is just not usable.
The picture object returned by PicturePlaceholder.insert_picture() has the same position and size as the placeholder it derives from. It is cropped to completely fill that space. Either the tops and bottoms are cropped or the left and right sides, depending on the relative aspect ratio of the placeholder and the image you insert. This is the same behavior PowerPoint exhibits when you insert a picture into a picture placeholder.
If you want to remove the cropping, simply set all cropping values to 0:
picture = placeholder.insert_picture(...)
picture.crop_top = 0
picture.crop_left = 0
picture.crop_bottom = 0
picture.crop_right = 0
This will not change the position (of the top-left corner) but will almost always change the size, making it either wider or taller (but not both).
So this solves the first problem easily, but of course presents you with a second one, which is how to position the picture where you want it and how to scale it appropriately without changing the aspect ratio (stretching or squeezing it).
This depends a great deal on what you're trying to accomplish and what outcome you find most pleasing. This is why it is not automatic; it's just not possible to predict.
You can find the "native" width and height of the image like this:
width, height = picture.image.size # ---width and height are int pixel-counts
From there you'll need to compare aspect ratios of the original placeholder and the image you inserted and either adjust the width or height of the picture shape.
So say you wanted to keep the same position but maintain the width and height of the placeholder as respective maximums such that the entire picture fits in the space but has a "margin" either on the bottom or the right:
available_width = picture.width
available_height = picture.height
image_width, image_height = picture.image.size
placeholder_aspect_ratio = float(available_width) / float(available_height)
image_aspect_ratio = float(image_width) / float(image_height)

# Get initial image placeholder left and top positions
pos_left, pos_top = picture.left, picture.top

picture.crop_top = 0
picture.crop_left = 0
picture.crop_bottom = 0
picture.crop_right = 0

# ---if the placeholder is "wider" in aspect, shrink the picture width while
# ---maintaining the image aspect ratio
if placeholder_aspect_ratio > image_aspect_ratio:
    picture.width = int(image_aspect_ratio * available_height)
    picture.height = available_height
# ---otherwise shrink the height
else:
    picture.height = int(available_width / image_aspect_ratio)
    picture.width = available_width

# Set the picture left and top position to the initial placeholder one
picture.left, picture.top = pos_left, pos_top

# Or if we want to center it vertically:
# picture.top = picture.top + int(picture.height/2)
This could be elaborated to "center" the image within the original space and perhaps to use "negative cropping" to retain the original placeholder size.
I haven't tested this and you might need to make some adjustments, but hopefully this gives you an idea how to proceed. This would be a good thing to extract to its own function, like adjust_picture_to_fit(picture).
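Such a helper might look like the following sketch; it is untested and simply folds together the crop-reset and resize steps shown above:

def adjust_picture_to_fit(picture):
    # Record the placeholder-derived geometry before changing anything
    available_width, available_height = picture.width, picture.height
    pos_left, pos_top = picture.left, picture.top

    # Remove the automatic cropping applied by insert_picture()
    picture.crop_top = 0
    picture.crop_left = 0
    picture.crop_bottom = 0
    picture.crop_right = 0

    # Shrink whichever dimension overflows, preserving the aspect ratio
    image_width, image_height = picture.image.size
    placeholder_aspect_ratio = float(available_width) / float(available_height)
    image_aspect_ratio = float(image_width) / float(image_height)
    if placeholder_aspect_ratio > image_aspect_ratio:
        picture.height = available_height
        picture.width = int(image_aspect_ratio * available_height)
    else:
        picture.width = available_width
        picture.height = int(available_width / image_aspect_ratio)

    # Restore the original top-left position
    picture.left, picture.top = pos_left, pos_top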
This worked for me. My image is larger than the placeholder (slide.shapes[2]).
picture = slide.shapes[2].insert_picture(img_path)
picture.crop_top = 0
picture.crop_left = 0
picture.crop_bottom = 0
picture.crop_right = 0

How to detect colored text from 6 meters away?

I am using Python, PIL, OpenCV and numpy to detect single-color texts (i.e., one is red, one is green). I want to detect these colored texts up to 6 meters away during a live stream. I have used color-detection methods, but they stop working beyond 30-50 cm; the camera has to be close to the colors. As a second approach I used the CTPN method. Although it detects text, it does not give me the coordinates of the text, and I need those coordinate points as well. I also tried the OCR method in MATLAB to automatically detect text in a natural image, but it failed because it flags other small objects as text. I am stuck on what to do.
Say, for example, there are two different texts in an image captured 6 meters away: one text is green, the other red. The width of each text is approximately 40-50 cm, and each is a single word, not a long passage. How can I detect them and report their locations as (x1,y1) and (x2,y2)? Is that possible? I'd appreciate any successful hint.
import numpy as np
from PIL import Image

# Open image and make RGB and HSV versions
RGBim = Image.open("AdjustedNewMaze3.jpg").convert('RGB')
HSVim = RGBim.convert('HSV')

# Make numpy versions
RGBna = np.array(RGBim)
HSVna = np.array(HSVim)

# Extract Hue
H = HSVna[:, :, 0]

# Find all green pixels, i.e. where 100 < Hue < 140
lo, hi = 100, 140
# Rescale to 0-255, rather than 0-360, because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
green = np.where((H > lo) & (H < hi))

# Count the matches while 'green' still holds the np.where indices
count = green[0].size
print("Pixels matched: {}".format(count))

# Make all green pixels black in the original image and save the result
RGBna[green] = [0, 0, 0]
Image.fromarray(RGBna).save('resultgreen.png')

def find_nearest(array, value):
    # Returns the single element of 'array' closest to 'value'
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]

value = 120 & 125  # note: '&' is bitwise AND here, so this evaluates to 120
print(find_nearest(RGBna, value))
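One possible direction, not from the original thread: mask the target hue in HSV with OpenCV, then take bounding boxes of the largest contours to get the (x1,y1)-(x2,y2) coordinates. A rough sketch, where the hue ranges, the area threshold and the filename are assumptions to tune:

import cv2
import numpy as np

img = cv2.imread('scene.jpg')  # hypothetical input frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 on OpenCV's 0-179 hue scale, so mask two bands
mask = (cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        | cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))

# Close small gaps so individual letters merge into word-level blobs
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:  # minimum-area threshold is a guess
        x, y, w, h = cv2.boundingRect(c)
        print('text region from', (x, y), 'to', (x + w, y + h))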

How to manipulate an image in Python to mimic macular degeneration

I'm trying to find a good package or algorithm to modify an image so that the center of the image is pushed outwards, to mimic macular degeneration. The best method I found was the image_slicer package: split the image into 4 pieces, push the inner corners, and stitch the images back together. But the join method of the package is not working and the documentation is unclear. Does anyone have a package that can do this?
Also, I am trying to push the outside of an image inwards, to create tunnel vision.
(For both of these I am still trying to preserve the image content; skewing is fine, but I am trying to prevent image loss.)
Some code I wrote:
import image_slicer

# Split image into 4 pieces
image_slicer.slice('piegraph.jpeg', 4)  # just a simple sample img

# Code to resize corners
# (I can figure this out later.)

# Stitch images back together
tiles = ("pie_01_01.png", "pie_01_02.png", "pie_02_01.png", "pie_02_02.png")
image_slicer.join(tiles)
You can use OpenCV and numpy to do what you want.
If I understand correctly, what you need is a mapping that takes the original pixels and relocates them as a function of their distance from the center of the image.
All the pixels inside the "black hole" should become black, and all the others should be bunched together.
So if we take an original image of the Earth, the result you are looking for is the same image warped around a central black hole (the example images are omitted here).
The following code does this. The parameters that you need to play with are:
RBlackHole - the radius of your black hole.
FACTOR - changes the amount of "bunching": too small and all the pixels will also be mapped to black, too large and they will not be bunched.
import cv2
import numpy as np
import math

# Read img
img = cv2.imread('earth.jpg')
rows, cols, ch = img.shape

# Params
FACTOR = 75
RBlackHole = 10

# Create a 2d mapping between the image and a new warp
xMap = np.zeros((rows, cols), np.float32)
yMap = np.zeros_like(xMap)
for i in range(rows):
    for j in range(cols):
        # Calculate the distance of the current pixel from the center of the image
        r = math.sqrt((i - rows/2)*(i - rows/2) + (j - cols/2)*(j - cols/2))
        if r <= RBlackHole:
            # Pixels within the radius of the black hole are mapped
            # to a location outside of the image (so they come out black).
            xMap[i, j] = rows * cols
            yMap[i, j] = rows * cols
        else:
            # Map the remaining pixels as a function of their distance from
            # the center: the further out they are, the more they are bunched.
            xMap[i, j] = (r - RBlackHole) * (j - cols/2) / FACTOR + cols/2
            yMap[i, j] = (r - RBlackHole) * (i - rows/2) / FACTOR + rows/2

# Apply the remapping
dstImg = cv2.remap(img, xMap, yMap, cv2.INTER_CUBIC)

# Save output image
cv2.imwrite("blackHoleWorld.jpg", dstImg)
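As a side note, the double Python loop above gets slow on large images; the same mapping can be built with vectorized numpy. A sketch that should be equivalent to the loops above, reusing the same img, FACTOR and RBlackHole:

# Vectorized construction of the same xMap/yMap
ii, jj = np.indices((rows, cols), dtype=np.float32)
r = np.sqrt((ii - rows/2)**2 + (jj - cols/2)**2)
xMap = np.where(r <= RBlackHole, rows * cols,
                (r - RBlackHole) * (jj - cols/2) / FACTOR + cols/2).astype(np.float32)
yMap = np.where(r <= RBlackHole, rows * cols,
                (r - RBlackHole) * (ii - rows/2) / FACTOR + rows/2).astype(np.float32)
dstImg = cv2.remap(img, xMap, yMap, cv2.INTER_CUBIC)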

Dithering in JES/Jython

My goal is to dither an image in JES/Jython using the Floyd-Steinberg method. Here is what I have so far:
def Dither_RGB(Canvas):
    for Y in range(getHeight(Canvas)):
        for X in range(getWidth(Canvas)):
            P = getColor(Canvas, X, Y)
            E = getColor(Canvas, X+1, Y)
            SW = getColor(Canvas, X-1, Y+1)
            S = getColor(Canvas, X, Y+1)
            SE = getColor(Canvas, X+1, Y+1)
    return
The goal of the above code is to scan through the image's pixels and process the neighboring pixels needed for Floyd-Steinberg.
What I'm having trouble understanding is how to go about calculating and distributing the differences in R,G,B between the old pixel and the new pixel.
Anything that could point me in the right direction would be greatly appreciated.
I don't know anything about the method you are trying to implement, but as for the rest: assuming Canvas is of type Picture, you can't get the color directly that way. The color of a pixel is obtained from a variable of type Pixel.
Example: here is a procedure that gets the color of each pixel from an image and assigns it at the exact same position in a new picture:
def copy(old_picture):
    # Create a picture to be returned, of the exact same size as the source one
    new_picture = makeEmptyPicture(old_picture.getWidth(), old_picture.getHeight())
    # Process the copy pixel by pixel
    for x in xrange(old_picture.getWidth()):
        for y in xrange(old_picture.getHeight()):
            # Get the source pixel at (x,y)
            old_pixel = getPixel(old_picture, x, y)
            # Get the pixel at (x,y) from the resulting new picture,
            # which remains blank until you assign it a color
            new_pixel = getPixel(new_picture, x, y)
            # Grab the color of the previously selected source pixel
            # and assign it to the resulting new picture
            setColor(new_pixel, getColor(old_pixel))
    return new_picture

file = pickAFile()
old_pic = makePicture(file)
new_pic = copy(old_pic)
Note: the example above applies only if you want to work on a new picture without modifying the old one. If your algorithm requires modifying the old picture on the fly, apply the final setColor directly to the original pixel (no need for a new picture, nor for the return statement).
Starting from here, you can compute anything you want by manipulating the RGB values of a pixel: use the setRed(), setGreen() and setBlue() functions applied to a Pixel, or build a color with col = makeColor(red_val, green_val, blue_val) and apply it to a pixel using setColor(a_pixel, col).
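For instance, a tiny sketch of both styles using the JES built-ins named above, operating on the new_pic from the copy example:

px = getPixel(new_pic, 0, 0)
setRed(px, getRed(px) / 2)   # channel setter: halve the red component
col = makeColor(getRed(px), getGreen(px), getBlue(px))
setColor(px, col)            # assigning a composed Color (same effect here)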
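Finally, since the question was specifically about Floyd-Steinberg: the standard approach quantizes each pixel in scan order and distributes the quantization error to the unvisited neighbors with weights 7/16 (right), 3/16 (down-left), 5/16 (down) and 1/16 (down-right). A hedged sketch for a single channel in JES style, untested and assuming JES clamps channel values to the 0-255 range:

def dither_red(pic):
    # Floyd-Steinberg on the red channel only, quantizing to black/white
    w = getWidth(pic)
    h = getHeight(pic)
    for y in range(h):
        for x in range(w):
            px = getPixel(pic, x, y)
            old = getRed(px)
            new = 255 if old > 127 else 0
            setRed(px, new)
            err = old - new
            # Push the error onto the neighbors not yet visited
            if x + 1 < w:
                p = getPixel(pic, x + 1, y)
                setRed(p, getRed(p) + err * 7 / 16)
            if y + 1 < h:
                if x > 0:
                    p = getPixel(pic, x - 1, y + 1)
                    setRed(p, getRed(p) + err * 3 / 16)
                p = getPixel(pic, x, y + 1)
                setRed(p, getRed(p) + err * 5 / 16)
                if x + 1 < w:
                    p = getPixel(pic, x + 1, y + 1)
                    setRed(p, getRed(p) + err * 1 / 16)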
