I have this Python code where "Employee" is a variable. Whenever I search for a company in my first JSON file, it gives me the responsible employee and assigns their name to the "Employee" variable. In another JSON file, I have a list of the employees with their addresses and emails. What I want is: whenever an employee is fetched using the first JSON file, the second file should come into play and pull his email and home address.
Context: a company wants an appointment, so we check who's free and assign that company a calendar slot.
User's job: insert a date, time, and company name.
Purpose of the code: each company in my file has 3 employees assigned to it, in order (1st JSON file). The code checks the first one, then checks in my Google Calendar whether he's busy; if he is, it checks the second one, and so on.
import json

# Read the json data
with open('convertcsv.json') as json_file:
    data = json.load(json_file)

employeesChosen = []
event_email = 'abc@abc.com'
event_start = '2020-05-09T13:00:00'
event_end = '2020-05-09T15:00:00'
employeeInsert = False

# event_fabricant (set elsewhere from the user's input) holds the company name.
# Add the three employees assigned to the company picked.
for i in range(len(data)):
    if data[i]['name_enterprise'] == event_fabricant:
        employeesChosen.append(data[i]['employee1'])
        employeesChosen.append(data[i]['employee2'])
        employeesChosen.append(data[i]['employee3'])
        location = data[i]['location']
        print("Employees found")
        break
If you want a data change to trigger some other action, you'll need a command/script. If you keep the files in a git repo, you could use git hooks. Or a cron job could check the content of the first file and do something with the second file (e.g. getting the email and address).
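For the lookup itself (pulling the email and home address once the employee's name is known), iterating over the second file is usually enough. A minimal sketch, assuming hypothetical keys "name", "email", and "address" in the second JSON file:

```python
import json

# Hypothetical contents of the second JSON file (employees.json).
employees = json.loads(
    '[{"name": "Alice", "email": "alice@example.com", "address": "1 Main St"},'
    ' {"name": "Bob", "email": "bob@example.com", "address": "2 Side St"}]'
)

def employee_details(name):
    # Return the (email, address) of the first employee matching `name`.
    for emp in employees:
        if emp["name"] == name:
            return emp["email"], emp["address"]
    return None

print(employee_details("Alice"))  # ('alice@example.com', '1 Main St')
```

In the real code this would run right after `employeesChosen` is filled, using the chosen employee's name as the argument.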
Related
There exists a public Google calendar whose calendar ID I have. I would like to set a start and end date and then get all the events between these dates in a CSV file. What is the most efficient way to do this in Python?
The closest solution I have found is https://stackoverflow.com/a/27213635/1936752, which yields a json file and does not filter by date. I could take the json output, write some code to filter only the dates I want, and then export to csv, but I guess there is a smarter way?
The manual way of doing what I want is to download the ics file using the "Export Calendar" function and then use an ics-to-csv converter such as https://openicsfile.com/csv-convert.html. I can then easily filter the dates I want. I wish to do exactly this, but using a Python script.
I believe your goal is as follows.
You want to retrieve the events from a publicly shared Google Calendar.
You want to retrieve the event title and the date as a CSV file.
You want to achieve this using python.
In this case, how about the following sample script?
Sample script:
In this sample script, the event list is retrieved using "Events: list" of the Calendar API with an API key. So, please retrieve your API key (see the "Authenticate using API keys" reference below) and enable the Calendar API in the API console.
Then, please set the variables in the following sample script.
import csv
from googleapiclient.discovery import build

api_key = "###"  # Please set your API key.
calendar_id = "###"  # Please set the calendar ID.
start = "2022-10-01T00:00:00Z"  # Please set the start date you want to search.
end = "2022-10-31T00:00:00Z"  # Please set the end date you want to search.

# Retrieve the event list using the googleapis client for Python.
service = build("calendar", "v3", developerKey=api_key)
events_result = service.events().list(
    calendarId=calendar_id,
    timeMin=start,
    timeMax=end,
    fields="items(summary,start,end)",
    timeZone="UTC",
).execute()

# Retrieve the event title and date from all-day events.
allDayEvents = [
    ["Event title", "Date"],
    *[
        [e.get("summary"), e.get("start").get("date")]
        for e in events_result.get("items", [])
        if e.get("start").get("date") and e.get("end").get("date")
    ],
]

# Output the retrieved values as a CSV file.
with open("sample.csv", "w") as f:
    writer = csv.writer(f, lineterminator="\n")
    writer.writerows(allDayEvents)
When this script is run, the event list is retrieved from the publicly shared Google Calendar using an API key, and a CSV file is created containing each event's title and date. In this sample, the all-day events from October 1, 2022 to October 31, 2022 are retrieved.
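Note that the sample above keeps only all-day events (those with a "date" field). If you also want timed events, the same items can be handled with a dateTime-or-date fallback; a sketch using a hypothetical response of the same shape as `events_result`:

```python
# Hypothetical API response, standing in for the real events_result.
events_result = {
    "items": [
        {"summary": "All-day", "start": {"date": "2022-10-01"},
         "end": {"date": "2022-10-02"}},
        {"summary": "Meeting", "start": {"dateTime": "2022-10-03T09:00:00Z"},
         "end": {"dateTime": "2022-10-03T10:00:00Z"}},
    ]
}

# Include both timed events ("dateTime") and all-day events ("date").
rows = [["Event title", "Start", "End"]]
for e in events_result.get("items", []):
    start_v = e["start"].get("dateTime", e["start"].get("date"))
    end_v = e["end"].get("dateTime", e["end"].get("date"))
    rows.append([e.get("summary"), start_v, end_v])

print(rows[1])  # ['All-day', '2022-10-01', '2022-10-02']
```

The resulting `rows` list can be written out with `csv.writer` exactly as in the script above.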
You can read more about the googleapis client for Python here and here.
References:
Authenticate using API keys
Events: list
I'm coding a program for Hypixel SkyBlock (Minecraft) that tells me the price of items that can be sold for a higher price.
The program is done but I'm running into an issue. I have made a config.json file that contains information like the Discord webhook, the lowest price, and whether to notify the user if an item is found. I'm having problems with the lowest-price function: when I set the lowest price to 1 in the config, the program doesn't work and raises a traceback.
The traceback doesn't matter because I know the issue: when I put the number into the config it doesn't work, but when I set lowestprice = 1 in the program manually it works, as if the program thinks the number from the config is text or something. Here's the config code:
with open("config.json", "r") as f:
    data = json.load(f)

webhookread = data["webhook"]
notifyread = data["notify"]
lowestpriceread = data["lowestbin"]

WEBHOOK = webhookread
LOWEST_PRICE = lowestpriceread  # <-- the issue
NOTIFY = notifyread
Is there a way to make the config file store the number as a real number rather than as text, so I can still use the config for numbers?
Are the numbers in the JSON file stored as strings or as numbers? If the JSON looks like this:
{
"lowestbin" : "123.45"
}
then the price is saved as a string and will need to be cast to a float. This is simple to do:
lowestpriceread = float(data["lowestbin"])
Note: this code will throw an exception if the data in the JSON cannot be converted to a float.
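Alternatively, if you control the config file, store the value as a bare JSON number (no quotes); `json.load` then returns an int or float directly and no cast is needed. A minimal sketch with hypothetical config contents:

```python
import json

# Hypothetical config contents; "lowestbin" is a bare number, not a quoted string.
config_text = '{"webhook": "https://example.invalid/hook", "notify": true, "lowestbin": 1}'
data = json.loads(config_text)

LOWEST_PRICE = data["lowestbin"]
print(type(LOWEST_PRICE).__name__)  # int
```

With the value stored this way, `LOWEST_PRICE = data["lowestbin"]` works without any conversion.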
I have a PyQt5 game for sports, and I am using a CSV file for the data. I would like to allow the user to go back and forward through rows (games) when there is more than one game on the same day (date).
At the moment I am only using next() from the csv module. This works fine, and in the next_game() method I can access the data from the row and update my labels in the UI. But when I looked for a previous() or similar method, I could not find one.
def next_game():
    next_Game = next(csvreader_fix)

self.next_match.clicked.connect(lambda: next_game())
I would like to allow the user to have a back button and update the UI to the previous game.
I was thinking of storing the game data in a dictionary every time next_game() is called, and using the keys to display the data for each game in the UI. I think this will also boost performance, as I will be using hashing rather than reading data from the CSV file when the user switches games and the UI is updated.
The keys will be the game number in the season, so it will be easy to add or subtract 1 when a button is pressed and step through the dictionary.
Once the user has finished looking at that day's matches, the data can be discarded from the dictionary, and the next day's games would be placed in the dictionary every time `next_game()` is called.
Am I overcomplicating things, and is there an easier way of achieving what I am trying to do, using pandas or another module?
At the moment I am reading the csv file like so
csvreader_fix = csv.reader(fixtures)
fix = next(csvreader_fix)
In `next_game()` I access each index of the row that I need, like this:
def next_game():
    next_Game = next(csvreader_fix)
    result = next_Game[6]
    home_team = next_Game[2]
    away_team = next_Game[3]
One option would be to use the pandas module, which has more functionality.
But if you really want to use the csv module, you can convert the whole CSV file into a list and then access the rows as you would access any list.
csvr = csv.reader(file)
game_data = list(csvr)

i = 0  # Or 1, if you have a header in your csv

def next_game():
    global i  # needed so the assignment updates the outer counter
    i += 1
    next_Game = game_data[i]

def previous_game():
    global i
    i -= 1
    previous_Game = game_data[i]
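A self-contained sketch of that idea, with bounds checking added so the buttons cannot step past either end of the list (the fixture rows and column layout here are made up to mirror the question):

```python
import csv
import io

# Hypothetical fixture rows standing in for the real CSV file:
# game number, date, home team, away team, two unused columns, result.
fixtures = io.StringIO(
    "1,2021-08-14,Arsenal,Brentford,,,2-0\n"
    "2,2021-08-14,Chelsea,Palace,,,3-0\n"
    "3,2021-08-15,Spurs,City,,,1-0\n"
)

game_data = list(csv.reader(fixtures))
i = 0  # index of the game currently shown

def next_game():
    global i
    if i < len(game_data) - 1:  # don't run past the last game
        i += 1
    return game_data[i]

def previous_game():
    global i
    if i > 0:  # don't go before the first game
        i -= 1
    return game_data[i]

row = next_game()
print(row[2], "vs", row[3])  # Chelsea vs Palace
```

In the PyQt5 app, the two button handlers would call these functions and use the returned row to update the labels.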
So I'm using python-telegram-bot for telegram integration into another application. My goal is to have the profile pictures of a user on telegram within my application. (Users and group chats)
Getting a user's or group's avatar is easy, and so is downloading and using it in my app. However, what if the user changes their profile picture? I couldn't find any update message or handler in the documentation that allows a bot to be notified of a picture change, not even for groups.
My first thought was to first retrieve all pictures and store the file_id in a database, then periodically check that user's/group's pictures and go back through their pictures until file_id matches the last saved file_id in the database.
This combined with a JobQueue is the best thing I can come up with, so I'll self-answer using that, but I think it's still not a perfect solution so if anyone has a better idea I'd appreciate an answer.
I'm specifically looking for a better solution for groups, since I don't think there is a way to retrieve anything but the most recent picture for groups, and my application should retrieve all of them. Another flaw in my self-answer is that if a user changes the profile picture twice within those six hours, I will only get the most recent one. This can be fixed for users with the offset attribute in the bot call, but the method that gets a group's profile picture does not seem to have one.
tl;dr:
How can I retrieve updates whenever a user changes their own or a group's profile picture, in the most efficient and reliable way, using python-telegram-bot and Python 3.5?
This is using telegram.ext.JobQueue to check for profile picture updates every 6 hours.
# define the job queue
j = updater.job_queue

def dl_pfps(bot, job):
    # This assumes we have a text file with the following layout:
    # "user_id:last_pfp_file_id" - one per line.
    # Later we'll write the list back with the newest IDs.
    user_pfp_list = []
    with open("user_pfps.txt") as f:
        for line in f:
            # strip() removes the trailing newline, which would otherwise
            # end up inside last_file_id and break the comparison below
            user_id, last_file_id = line.strip().split(':')
            most_recent_pfp = bot.get_user_profile_photos(user_id, limit=1).photos[0]
            if last_file_id == most_recent_pfp[-1].file_id:
                print("No change")
                user_pfp_list.append(user_id + ":" + last_file_id)
            else:
                print("User updated profile picture. Getting full-size picture...")
                # download and process the picture
                file_id = most_recent_pfp[-1].file_id
                newFile = bot.getFile(file_id)
                newFile.download('my/filename.jpg')
                user_pfp_list.append(user_id + ":" + file_id)
    # write the new list back to the file (overwriting the current list)
    with open("user_pfps.txt", "w") as f:
        f.write("\n".join(user_pfp_list))

# check for new profile pictures every 6 hours
job_dlpfps = j.run_repeating(dl_pfps, interval=21600, first=0)
This is the best I can come up with. If you want to use this in your code, adjust 'my/filename.jpg' to a proper filename, and generate an initial list in user_pfps.txt with one line per user, like this: user_id:0
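For users (not groups), the offset attribute mentioned in the question can be used to collect every profile picture instead of only the most recent one. A sketch of the paging loop; get_user_profile_photos with offset and limit exists in python-telegram-bot, but the loop shape itself is only an illustration:

```python
def get_all_user_pfps(bot, user_id, batch=100):
    # Page through all of a user's profile photos using the offset
    # parameter of get_user_profile_photos. Each returned photo is a
    # list of PhotoSize objects (one per resolution).
    photos = []
    offset = 0
    while True:
        result = bot.get_user_profile_photos(user_id, offset=offset, limit=batch)
        if not result.photos:
            break
        photos.extend(result.photos)
        offset += len(result.photos)
        if offset >= result.total_count:  # collected everything reported
            break
    return photos
```

This would let the job above detect multiple changes within one interval for users; groups still only expose their current photo, so the limitation there remains.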
This is my function to build a record of the user's performed actions in a Python CSV. It gets the username from a global and adds the increment given in the amount parameter at the specific location of the CSV matching the user's row and the current date.
In brief, the function reads the CSV into a list, makes any modifications to the data, and then rewrites the whole list back into the CSV file.
The first item of every row is the username, and the header holds the dates.
Accs\Dates,12/25/2016,12/26/2016,12/27/2016
user1,217,338,653
user2,261,0,34
user3,0,140,455
However, I'm not sure why sometimes the header gets pushed down to the second row, and the data gets wiped entirely when it crashes.
Also, I need to point out that there may be multiple scripts running this function and writing to the same file; I'm not sure if that is causing the issue.
I'm thinking maybe I can write the stats separately and uniquely for each user and combine them later, hence eliminating the possible clash in writing. Although it would be great if I could just improve what I have here and read/write everything in one file.
Is there any fail-safe way to do what I'm trying to do here?
# Search for the current user in the first column and update the count in
# the column for today's date; 'amount' is added at that position.
def dailyStats(self, amount, code=None):
    def initStats():
        # prepping the table
        with open(self.stats, 'r') as f:
            reader = csv.reader(f)
            for row in reader:
                if row:
                    self.statsTable.append(row)
                    self.statsNames.append(row[0])

    def getIndex(seq, match):
        # get the index of the matched date or user
        for i, j in enumerate(seq):
            if j == match:
                return i

    self.statsTable = []
    self.statsNames = []
    self.statsDates = None
    initStats()

    today = datetime.datetime.now().strftime('%m/%d/%Y')
    user_index = None
    today_index = None

    # append the header if the csv is empty
    if len(self.statsTable) == 0:
        self.statsTable.append([r'Accs\Dates'])
        # rebuild the updated table
        initStats()

    # add a new user/date if not found in the first column/row
    self.statsDates = self.statsTable[0]
    if getIndex(self.statsNames, self.username) is None:
        self.statsTable.append([self.username])
    if getIndex(self.statsDates, today) is None:
        self.statsDates.append(today)

    # rebuild statsNames after appending to the table
    self.statsNames = []
    for row in self.statsTable:
        self.statsNames.append(row[0])

    # get the index of the user (row) and date (column)
    user_index = getIndex(self.statsNames, self.username)
    today_index = getIndex(self.statsDates, today)

    # In the matched user's row, if dates before today have no data,
    # append 0 (e.g. user1,0,0,0) up to the column of today's date.
    if len(self.statsTable[user_index]) < today_index + 1:
        for i in range(0, today_index + 1 - len(self.statsTable[user_index])):
            self.statsTable[user_index].append(0)

    # insert the pv or tb code if given
    if code is None:
        self.statsTable[user_index][today_index] = amount + int(
            re.match(r'\b\d+?\b', str(self.statsTable[user_index][today_index])).group(0))
    else:
        self.statsTable[user_index][today_index] = str(
            re.match(r'\b\d+?\b', str(self.statsTable[user_index][today_index])).group(0)) + ' - ' + code

    # write the final table
    with open(self.stats, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(self.statsTable)

    # return the sum of the user's total count
    total_follow = 0
    for i in range(1, len(self.statsTable[user_index])):
        total_follow += int(re.match(r'\b\d+?\b', str(self.statsTable[user_index][i])).group(0))
    return total_follow
As David Z says, concurrency is more likely the cause of your problem.
I will add that the CSV format is not suitable for database-style storing, indexing, and sorting, because it is plain text and sequential.
You could handle this by using an RDBMS for storing and updating your data, periodically producing your stats; CSV then becomes just an import/export format.
Python offers a SQLite binding in its standard library. If you build a connector that imports/updates the CSV content into a SQLite schema and then dumps the results back out as CSV, you will be able to handle concurrency and keep your native format, without worrying about installing a database server or new Python packages.
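A minimal sketch of that approach with hypothetical table and column names, relying on SQLite's built-in locking (plus a lock timeout) to serialize concurrent writers; note the ON CONFLICT upsert syntax needs SQLite 3.24 or later:

```python
import os
import sqlite3

db_path = "stats.db"  # the one shared file every script would open
if os.path.exists(db_path):
    os.remove(db_path)  # start fresh for this demo

conn = sqlite3.connect(db_path, timeout=30)  # wait up to 30 s for a lock
conn.execute(
    "CREATE TABLE IF NOT EXISTS daily_stats ("
    "user TEXT, day TEXT, count INTEGER DEFAULT 0, "
    "PRIMARY KEY (user, day))"
)

def add_stat(user, day, amount):
    # UPSERT: insert the row, or atomically increment the existing count.
    with conn:  # wraps the statement in a committed transaction
        conn.execute(
            "INSERT INTO daily_stats (user, day, count) VALUES (?, ?, ?) "
            "ON CONFLICT(user, day) DO UPDATE SET count = count + excluded.count",
            (user, day, amount),
        )

add_stat("user1", "12/25/2016", 217)
add_stat("user1", "12/25/2016", 10)
row = conn.execute(
    "SELECT count FROM daily_stats WHERE user = ? AND day = ?",
    ("user1", "12/25/2016"),
).fetchone()
print(row[0])  # 227
```

Because each increment is a single transaction, two scripts calling add_stat at the same time cannot corrupt the data the way concurrent CSV rewrites can.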
Also, I need to point out that there maybe multiple script running this function and writing on the same file, not sure if that causing the issue.
More likely than not, that is exactly your issue. When two things try to write to the same file at the same time, the output from the two sources can easily get mixed together, resulting in a file full of gibberish.
An easy way to fix this is just what you mentioned in the question: have each process (or thread) write to its own file, then have separate code combine all those files at the end. That's what I would probably do.
If you don't want to do that, you can have the different processes/threads send their information to an "aggregator" process, which puts everything together and writes it to the file - the key is that only the aggregator ever writes to the file. Of course, doing that requires some method of interprocess communication (IPC), which can be tricky depending on how you do it. Actually, one of the best ways to implement IPC for simple programs is with temporary files, which is essentially the same as the approach in the previous paragraph.
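A sketch of the one-file-per-process idea with hypothetical filenames: each worker appends only to its own CSV, and a separate combiner step merges the partial files, so no two writers ever touch the same file:

```python
import csv
import glob
import os

# Each worker appends rows only to its own file ("stats_<pid>.csv"),
# so no two processes ever write to the same file.
my_file = "stats_%d.csv" % os.getpid()
if os.path.exists(my_file):
    os.remove(my_file)  # start fresh for this demo

def log_stat(user, day, amount):
    with open(my_file, "a", newline="") as f:
        csv.writer(f).writerow([user, day, amount])

# A single combiner step merges the partial files afterwards; since only
# one process runs it, there is no concurrent write to the output file.
def combine(pattern="stats_*.csv", out="combined_stats.csv"):
    totals = {}
    for path in glob.glob(pattern):
        with open(path, newline="") as f:
            for user, day, amount in csv.reader(f):
                totals[(user, day)] = totals.get((user, day), 0) + int(amount)
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        for (user, day), count in sorted(totals.items()):
            writer.writerow([user, day, count])

log_stat("user1", "12/25/2016", 217)
log_stat("user1", "12/25/2016", 10)
combine()
```

Appends of single short lines are far less likely to interleave than full-file rewrites, and the combiner can run on a schedule (e.g. from cron) to produce the merged table in the question's format.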