I've been looking all over and cannot find a straightforward way of accomplishing this. I know the NBA API endpoint shotchartlineupdetail is useful; I'm just not quite sure how to use it properly. My plan is to use the group_id parameter to figure out which lineup is on the floor. Ideally, I'd like to:
Create a shot chart for player A
Go through all those shots and sort them based on the lineup on the court (theoretically through group_id)
Based on these lineups, create two shot charts: shot chart A with all of player A's shots when player B is on the court, and shot chart B with all of player A's shots when player B is NOT on the court
In a perfect world, I'd also like to see the data on shots assisted by player B. But I don't know if that's too much to ask
I tried accessing the stats on nba.com but I could not reach the appropriate page for shotchartdetail. I have looked at the code here: https://github.com/shanefenske/nba-shots-wowy/tree/8bc2bd4245e304128742d17799219e7e45333adc and at "How to get nba shot chart data correctly?", but nothing there uses shotchartlineupdetail.
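Here is roughly what I'm imagining for the splitting step, using the nba_api package. Please treat the parameter names, the dash-separated GROUP_ID format, the column names, and the lineup_group_ids input (which I think could be gathered from the leaguedashlineups endpoint) as guesses on my part:

import pandas as pd
from nba_api.stats.endpoints import shotchartlineupdetail

PLAYER_A, PLAYER_B = 201939, 202691  # hypothetical player IDs

# Hypothetical: GROUP_IDs for every 5-man lineup featuring player A
lineup_group_ids = ['-201142-201939-202691-203110-203507-']

with_b, without_b = [], []
for group_id in lineup_group_ids:
    shots = shotchartlineupdetail.ShotChartLineupDetail(
        group_id=group_id,
        season='2018-19',
        season_type_all_star='Regular Season',
    ).get_data_frames()[0]
    shots_a = shots[shots['PLAYER_ID'] == PLAYER_A]  # only player A's attempts
    if str(PLAYER_B) in group_id:  # player B was in this lineup
        with_b.append(shots_a)
    else:
        without_b.append(shots_a)

chart_a = pd.concat(with_b) if with_b else pd.DataFrame()        # B on court
chart_b = pd.concat(without_b) if without_b else pd.DataFrame()  # B off court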
I've been trying to tailor this code to work on my desktop so I can auto-farm some enemies in a game I play. Right now the code is stuck at a point where it scans the UI at a certain pixel for a certain shade of blue. If it's the correct shade, the code progresses; if not, it completely stops. I'm not sure what to do, but if someone could take a look I'd greatly appreciate it. Here are some screenshots and the code:
Also, the code is broken into two pieces: the first script allows for interaction with the game, and the second handles automatic movement, clicking, etc. The second piece is simple enough that it doesn't need to be posted here, nor is it necessary.
First piece (the main problem is in the POSITION and COLOR part of def is_mana_low):
def is_mana_low(self):
    self.set_active()
    # Matches a pixel in the lower third of the mana globe
    POSITION = (79, 565)
    COLOR = (66, 13, 83)
    THRESHOLD = 10
    return not self.pixel_matches_color(POSITION, COLOR, threshold=THRESHOLD)

def use_potion_if_needed(self):
    mana_low = self.is_mana_low()
    health_low = self.is_health_low()
    if mana_low:
        print('Mana is low, using potion')
    if health_low:
        print('Health is low, using potion')
    if mana_low or health_low:
        self.click(160, 590, delay=.2)
My Discord is APieceofString#5151 if you wanted to hit me up for more information or a better explanation. I really appreciate this :)
I had the same issue when setting it to a specific x,y coordinate. So here's how I resolved this:
I used pyautogui and cv2. I took a screen capture, then created a fairly thin rectangle just below the yellow number showing your remaining mana, running from the edge of the energy globe to the edge of the mana globe. While my script is running, it checks for a match of that rectangle: if there's a match, you have plenty of mana; if not, you may want to drink a potion.
I tried to do the same thing to check for mana wisps, and if it sees one on the screen, run to the wisp, and then back to the home position (but that function is kind of hinky atm and I'm trying to get it working better).
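Here's roughly what that check looks like as code (the template file name and threshold are placeholders to tune for your own screen):

import cv2
import numpy as np
import pyautogui

# 'mana_full.png' is a placeholder: the thin strip cropped from a
# screenshot taken while mana was full
TEMPLATE = cv2.imread('mana_full.png')
MATCH_THRESHOLD = 0.9  # 1.0 would be a pixel-perfect match

def mana_is_plentiful():
    # pyautogui screenshots are RGB; OpenCV works in BGR
    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
    result = cv2.matchTemplate(screen, TEMPLATE, cv2.TM_CCOEFF_NORMED)
    return result.max() >= MATCH_THRESHOLD  # strip found => plenty of mana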
Not an important question, I know, but just a silly side project a friend asked of me. Is there a way I can read and alter the inputs from a drawing tablet so that I can use it in a first-person shooter game, using something like Python?
From what I know, the pen sets the mouse position to the absolute location corresponding to the tablet, which causes you to stare at the ground and spin, versus a mouse, which changes the cursor's current location by a relative amount, which is what a game expects.
My plan is to read the pen's location, compare it to its last location, and move the mouse by that difference instead of setting it to an absolute position.
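In case it clarifies the idea, here's a rough sketch of that plan, assuming the pynput package (I don't know yet whether a game reading raw input would accept moves injected this way, so treat it only as a starting point):

import time
from pynput.mouse import Controller

mouse = Controller()
last = mouse.position

while True:
    time.sleep(0.01)
    current = mouse.position  # absolute position the tablet just set
    dx, dy = current[0] - last[0], current[1] - last[1]
    if dx or dy:
        mouse.move(dx, dy)    # re-inject the change as relative motion
    last = mouse.position     # re-read so our own move isn't counted next tick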
Any tips or guidance is appreciated!
It is already possible without any code if you change the drawing tablet's input mode from absolute to relative in its driver settings (many drivers, e.g. Wacom's, call this "mouse mode").
I am working on a project which scrapes Steam sales data and generates an infographic based on the data I scraped. I have finished the data-scraping part, including the pictures, and am now working on the infographic.
The scraped data looks like this:
[Neon Chrome,-71%,¥ 14,86.79%,4 days, Neon Chrome is a cyberpunk-themed twin-stick shooter video game played from a top-down perspective. The player takes control of remote controlled human clones and is tasked with eliminating the Overseer to stop their oppressive regime on the dystopic society.]
[Crossing Souls,-75%,¥ 12,78.79%,1 day,11 hours ago,1 year ago,1,331690, Crossing Souls is a inventive thrill ride that embraces clever, varied gameplay and heartfelt storytelling to coalesce into a gem of a game.]
Ideally I want to reorganize the text and generate an infographic from scratch like this (some info is just for showcase):
However, I really do not know how to generate such a picture using Python or other automated methods. I found several libraries like Pillow which have basic APIs to import pictures and draw lines and text, but they cannot handle the layout properly, since my infographic could have a variable number of cards and the texts' lengths may vary, so I cannot hardcode the text or images to appear at fixed (x, y) positions.
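The best I can come up with is tracking a running y offset with Pillow: size the canvas from the card count and place each card below wherever the previous one ended. A minimal sketch with placeholder fields and dimensions:

from PIL import Image, ImageDraw, ImageFont

# Placeholder rows in the shape of the scraped data
cards = [
    ("Neon Chrome", "-71%", "¥ 14", "86.79%"),
    ("Crossing Souls", "-75%", "¥ 12", "78.79%"),
]

font = ImageFont.load_default()
WIDTH, PAD, CARD_H = 600, 10, 80

# Size the canvas from the number of cards instead of a fixed layout
canvas = Image.new("RGB", (WIDTH, PAD + len(cards) * (CARD_H + PAD)), "white")
draw = ImageDraw.Draw(canvas)

y = PAD
for title, discount, price, rating in cards:
    draw.rectangle([PAD, y, WIDTH - PAD, y + CARD_H], outline="black")
    draw.text((2 * PAD, y + 10), "%s  %s  %s" % (title, discount, price), font=font, fill="black")
    draw.text((2 * PAD, y + 40), "Positive reviews: %s" % rating, font=font, fill="black")
    y += CARD_H + PAD  # the next card starts wherever this one ended

canvas.save("infographic.png")

(Perhaps generating HTML/CSS and rendering it to an image would handle flowing layouts and variable text lengths more gracefully, but I don't know the standard tooling for that.)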
I'm trying to program an experiment in which I want to find out how humans cognitively segment movement streams. For example, if the movement stream is a person climbing a flight of stairs, each step might be a single segment.
The study is basically a replication of this one, but with another set of stimuli: http://dl.acm.org/citation.cfm?doid=2010325.2010326
Each trial should be structured like the following:
Present a video of a motion stream. Display a bar beneath the video with a marker that moves in sync with the current time of the video (very similar to the GUI of a video player).
Present that video again, but now let the participant add stationary markers to the bar beneath the video by pressing a key. Each marker should be placed at the point on the bar that corresponds with the time the button was pressed (e.g. if the video is 100 seconds long and the button was pressed 10 seconds in, the marker should be placed at the 10% mark of the bar).
My instructor suggested programming the whole thing using PsychoPy. PsychoPy currently only supports Python 2.7.
I've looked into the program and it looks promising. One can display a video easily and the rating scale class is similar to the bar we want to implement. However, several features are missing, namely:
One can only set a single marker; subjects should be able to set multiple markers.
As mentioned in the trial structure above, we want to have a marker that moves in sync with the video.
When a key press occurs, a marker should be placed at the point on the bar that corresponds with the current time point in the video.
Hence my questions: Do you have any tips for implementing the features described above using the PsychoPy module?
I don't know how much this gets into recommendation-question territory, but in case you know of a module for writing experiment GUIs that has widgets with the features we want for this experiment, I would be curious to hear about it.
PsychoPy is a good choice for this. The rating scale however (as you note) is probably not the right tool for creating the markers. You can make simple polygon shapes though, which could serve as your multiple markers as well as the continuous time indicator.
e.g. you could make a polygon stimulus with three vertices (to make a triangle indicator) and set its location to be something like this (assuming you are using normalised coordinates):
$[(t/movie_duration) * 2 - 1, -0.9]
t is a Builder variable that represents the time elapsed in the current trial in seconds. The centre of the screen is at coordinates [0, 0]. So the code above would make the pointer move smoothly from the left-hand edge of the screen to the right, close to the bottom edge, reaching the right-hand edge just as the movie ends. Set the polygon's position field to update every frame so that the animation is continuous.
movie_duration is a placeholder variable for the duration of the movie in seconds. You could specify this in your conditions file, or I think you can query the movie component to get its duration, with something like:
$[(t/movie_stim_name.duration()) * 2 - 1, -0.9]
You could leave markers on the screen in response to keypresses in a similar way, but this would require a little Python code in a code component.
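For example (an untested sketch, assuming movie_duration as above and the space bar as the marking key), the code component could hold something like:

# Begin Routine
markers = []

# Each Frame
if event.getKeys(['space']):
    x = (t / movie_duration) * 2 - 1  # same mapping as the moving indicator
    markers.append(visual.Polygon(win, edges=3, size=(0.05, 0.05),
                                  pos=(x, -0.9), fillColor='red'))
for m in markers:
    m.draw()  # redraw all markers placed so far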
For research purposes I've built a small tanks game where you have one tank controlled by a player and one or more NPC tanks.
Now I want these NPC tanks to navigate through a field of which they have no knowledge. They can detect obstacles within a certain range. When they detect obstacles, they should save them in some data structure that's easy to query, so that they can take the obstacles into account when moving.
Now here is where I'm stuck: if my field were a grid, it would be quite easy; I would just save which tiles/nodes each obstacle is on.
But I haven't really worked with a grid; my tanks just move forward a few pixels depending on their speed, so a tank can be located at any pixel position, as can the obstacles.
Now how would I handle this? Collision detection is out of scope.
Am I forced to use some kind of grid or waypoints?
Why not use a navigation mesh solution? It seems like exactly what you're looking for: a method for representing a domain for AI navigation with arbitrary polygonal obstacles.
GitHub is down at the moment, but according to this website (which is worth checking out; it's an interesting Java implementation), this project has a Python navigation mesh implementation.
Edit:
Based on your comments below, I think that a hierarchical representation is actually closer to the answer you are looking for. This article links to a paper describing how to abstract the pixel-by-pixel grid (with arbitrarily shaped obstacles) into a node-edge graph for faster navigation calculations. By combining this type of hierarchical representation with a dynamic navigation algorithm such as D* (see this answer for an overview of dynamic navigation algorithms), you should be able to implement a solution to your problem.
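If a full navmesh feels heavyweight, the core of that abstraction can be sketched as a simple spatial hash: snap each detected obstacle's continuous pixel position into a coarse cell, giving a queryable store without forcing the tanks themselves onto a grid (the cell size is an assumption to tune):

from collections import defaultdict

CELL = 32  # coarse cell size in pixels; tune to tank/obstacle dimensions

class ObstacleMap:
    def __init__(self):
        self.cells = defaultdict(set)  # (cell_x, cell_y) -> detected points

    def _key(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def add(self, x, y):
        # Call whenever a tank's sensor detects an obstacle at pixel (x, y)
        self.cells[self._key(x, y)].add((x, y))

    def blocked(self, x, y):
        # Cheap lookup a planner such as D* can make per node expansion
        return bool(self.cells.get(self._key(x, y)))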