Checking my JSON file for a user's ID - Python

So my plan is to search the JSON file first and check whether the user's ID is in there. If it is, the bot should respond with "You have already registered."; if not, it should continue with the rest of the code (which is where "data" will be added).
So far I have only managed to add the "data" to the JSON file; I couldn't figure out how to search for the ID, and I eventually deleted the rest of my attempts while looking for a solution.
@client.command(aliases=['Register'])
async def register(ctx):
    data = ctx.author.id, currency, Class, Cards
    with open('Player_Stats.json', 'w') as f:
        json.dump(data, f)
    await ctx.send("Account registered!")

You should prefer a database query over a JSON file lookup. If you still want to go with a file, you should probably detail the file format. Either way, you will load the file and search for the key or value in the JSON.
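As a minimal sketch of that lookup (this assumes the file holds one JSON object keyed by user ID; the field names are stand-ins for the `currency`, `Class`, and `Cards` values from the question, not a confirmed schema):

```python
import json

def register_player(user_id, currency, player_class, cards,
                    path='Player_Stats.json'):
    """Return False if the ID is already registered, otherwise add it."""
    # Load the existing records (start with an empty dict if the file
    # doesn't exist yet).
    try:
        with open(path) as f:
            players = json.load(f)
    except FileNotFoundError:
        players = {}

    key = str(user_id)  # JSON object keys are always strings
    if key in players:
        return False  # already registered

    players[key] = {'currency': currency, 'class': player_class, 'cards': cards}
    with open(path, 'w') as f:
        json.dump(players, f, indent=4)
    return True
```

Inside the command you would then call this helper and send "You have already registered." whenever it returns False.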

Related

How do I add another entry to a JSON file in Python?

My JSON file:
{
    "ali": {"name": "ali", "age": 23, "email": "his email"},
    "joe": {"name": "joe", "age": 55, "email": "his email"}
}
And my code:
name = input("name:")
age = input("age: ")
email = input("email:")
list = {}
list[name] = {"name": name, "age": age, "email": email}
data = json.dumps(list)
with open('info.json', 'a') as f:
    f.write(data)
I need a method to append another entry (another name) to the JSON file. Any ideas?
To update an existing JSON file you need to read the entire file, make your adjustments, and write the whole lot back again:
with open('info.json', 'r') as f:
    data = json.load(f)

name = input("name:")
age = input("age: ")
email = input("email:")
data[name] = {"name": name, "age": age, "email": email}

with open('info.json', 'w') as f:
    json.dump(data, f)
By the way, there are no lists involved here, just nested dictionaries.
Also, if the user enters a duplicate name, this code will overwrite the entry in the file with the new data. This may or may not be what you want.
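If overwriting is not what you want, a small guard before the assignment prevents it. This is a sketch reusing the same file structure, with a hypothetical helper name:

```python
import json

def add_person(path, name, age, email):
    """Add a new entry; refuse to overwrite an existing name."""
    with open(path, 'r') as f:
        data = json.load(f)

    if name in data:
        return False  # duplicate name: leave the existing entry alone

    data[name] = {"name": name, "age": age, "email": email}
    with open(path, 'w') as f:
        json.dump(data, f, indent=4)
    return True
```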

Suggestions on making the code read the CSV files

So I have this code for a Discord bot that allows moderation staff to go on leave and then return. The command takes away their moderation role and gives them a leave role; however, there are seniority levels of staff, so each level has its own command.
The code is supposed to log the ID of the user who used the command into a specific CSV file. However, the returning command doesn't seem to be storing or reading the ID from the file, i.e. a moderator can use their command to go on leave and then use an admin return command, giving them an admin role.
I believe this is because the code isn't reading or writing the desired files. This is the code I have:
@client.command()
@commands.has_role('Moderator')
async def slm(ctx, member: discord.Member = None):
    if not member:
        member = ctx.author
    loa = ctx.guild.get_role(848032714715561985)
    mod = ctx.guild.get_role(848032880709074944)
    await member.add_roles(loa)
    await member.remove_roles(mod)
    file = open("modRecord.csv", "w")
    file.write(str(ctx.author.id))
    file.close()
    await ctx.send("I have filed your Leave, take care, we look forward to your return!")

@client.command()
async def srm(ctx, member: discord.Member = None):
    if not member:
        member = ctx.author
    mod = ctx.guild.get_role(848032880709074944)
    loa = ctx.guild.get_role(848032714715561985)
    found = False
    with open('modRecord.csv', 'r') as file:
        reader = csv.reader(file)
        for row in reader:
            if row[0] == str(ctx.author.id):
                found = True
                break
    if found is False:
        await member.add_roles(mod)
        await member.remove_roles(loa)
        await ctx.send("Welcome back!")
    else:
        await ctx.send("We do not have history of you having a Moderator role.")
The commands for the other moderator roles are the same, except each uses a different file to store its IDs; essentially, each command has its own file.
Could anyone tell me why it's not reading or writing the files, or suggest a better recording system?
A database is indeed a better way of storing such data, but there is no real problem with using CSV. I found one mistake: every time you store the member/author ID, you open the file in write mode:
file = open("modRecord.csv", "w")
file.write(str(ctx.author.id))
This clears all the previous data in your CSV and then writes the new ID.
Here are some possible ways to handle it:
Use append mode: this adds the ID at the end of the file.
file = open("modRecord.csv", 'a')
file.write(str(ctx.author.id) + '\n')  # the newline keeps one ID per row
file.close()
Use pandas: pandas is a Python library that can easily read and write CSV files, letting you treat a CSV like a spreadsheet. There are other libraries too, like openpyxl, but I prefer pandas.
Read first, then write: read the file, store its contents in a variable, make whatever changes you want to the content (replacing a row, deleting a row, inserting a row between two others, and so on), and then finally write the variable back to the file.
file = open('modRecord.csv', 'r')
file_content = file.read()
file.close()
# ... make changes in file_content ...
new_file = open('modRecord.csv', 'w')
new_file.write(file_content)
new_file.close()
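A concrete sketch of that read-modify-write pattern, assuming the file stores one ID per row as in the question, and using the `csv` module to drop a given ID when the member returns:

```python
import csv

def remove_id(path, user_id):
    """Rewrite the CSV, dropping every row whose first column is user_id."""
    # Read and filter everything first...
    with open(path, 'r', newline='') as f:
        rows = [row for row in csv.reader(f) if row and row[0] != str(user_id)]
    # ...then write the remaining rows back.
    with open(path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
```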

How to modify my function to save retrieved tweets

Hi guys, I am working on a personal project in which I search for tweets containing specific keywords. I collected about 100 recent tweets for each of the keywords and saved them to the variables x1_tweets, x2_tweets, and x3_tweets. The data is basically a list of dictionaries, and the fields look like this:
['created_at', 'id', 'id_str', 'text', 'truncated', 'entities', 'metadata', 'source', 'in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_user_id', 'in_reply_to_user_id_str', 'in_reply_to_screen_name', 'user', 'geo', 'coordinates', 'place', 'contributors', 'is_quote_status', 'retweet_count', 'favorite_count', 'favorited', 'retweeted', 'lang']
I then wanted to save just the text of the tweets from each of the variables to a JSON file. For that I defined a function (it saves a list of dictionaries to a JSON file; obj is the list of dictionaries and filename is the name I want to save it as):
def save_to_json(obj, filename):
    with open(filename, 'w') as fp:
        json.dump(obj, fp, indent=4, sort_keys=True)
In order to get only the tweets, I implemented the following code:
for i, tweet in enumerate(x1_tweets):
    save_to_json(tweet['text'], 'bat')
However, I have had no success thus far. Can anyone please point me in the right direction? Thanks in advance!
Edit: I am using TwitterAPI.
The first thing you need to do is change the function to open the file in append mode:
def save_to_json(obj, filename):
    with open(filename, 'a') as fp:
        json.dump(obj, fp, indent=4, sort_keys=True)
You need to change the mode in which the file is opened, for the following reason:
w: opens in write-only mode. The pointer is placed at the beginning of the file, and this will overwrite any existing file with the same name. It will create a new file if one with the same name doesn't exist.
a: opens the file for appending new information to it. The pointer is placed at the end of the file. A new file is created if one with the same name doesn't exist.
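The difference between the two modes is easy to demonstrate with a throwaway file:

```python
# 'w' truncates: each open starts from an empty file.
with open('demo.txt', 'w') as f:
    f.write('first')
with open('demo.txt', 'w') as f:
    f.write('second')
with open('demo.txt') as f:
    print(f.read())  # second

# 'a' appends: the earlier contents survive.
with open('demo.txt', 'a') as f:
    f.write('-third')
with open('demo.txt') as f:
    print(f.read())  # second-third
```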
Also, sort_keys has no effect here, since you are passing a string and not a dict; similarly, indent=4 is meaningless for strings.
If you want some indexing with the tweet text, you can use the code below:
tweets = {}
for i, tweet in enumerate(x1_tweets):
    tweets[i] = tweet['text']
save_to_json(tweets, 'bat.json')
The above code builds a dict mapping each index to its tweet text and writes it to the file once, after all tweets have been processed.
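Reading the file back then recovers the index-to-text mapping. Below is a self-contained sketch (the `x1_tweets` data is a hypothetical stand-in for the real tweet dicts); note that the integer keys come back as strings, since JSON object keys are always strings:

```python
import json

# Hypothetical stand-ins for the tweet dicts from the question.
x1_tweets = [{'text': 'first tweet'}, {'text': 'second tweet'}]

tweets = {i: tweet['text'] for i, tweet in enumerate(x1_tweets)}
with open('bat.json', 'w') as fp:
    json.dump(tweets, fp, indent=4, sort_keys=True)

with open('bat.json') as fp:
    loaded = json.load(fp)
print(loaded['0'])  # first tweet
```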

How would I delete something from a JSON file?

I'm not sure how to delete something from a .json file.
I've tried looking it up, and still nothing :(
@bot.command()
async def afkremoveme(ctx):
    # pls help me I'm lost!
There are no errors.
I'm not sure what you want your command to do, but here's an example of how you would use JSON with discord.py.
Here, whenever the command is executed, the bot opens a JSON file, reads the data, and checks whether the message author is in it. If the author is in the data, the key/value pair is deleted, and the data is rewritten to the JSON file:
import json

@bot.command()
async def afkremoveme(ctx):
    f = "yourFile.json"
    author = str(ctx.author.id)
    with open(f, "r") as read_file:
        data = json.load(read_file)
    if author in data:  # integer keys become strings when a json file is written
        del data[author]
        newData = json.dumps(data, indent=4)
        with open(f, "w") as write_file:
            write_file.write(newData)
        await ctx.send(f"{ctx.author.display_name} is no longer afk...")
This reads a JSON file that looks like this (replace 000000 with your ID):
{
    "000000": "afk",
    "someOtherGuy": "afk"
}
All of this uses dictionaries and the json module. If you're unfamiliar with either of the concepts, here are a few links to help you out :-)
Python Dictionaries,
Python-Json

Load JSON file into a dictionary and not string or list

I have created a JSON file after scraping data online with the following simplified code:
for item in range(items_to_scrape):
    az_text = []
    for n in range(first_web_page, last_web_page):
        page_link = base_url + str(n)
        reviews_html = requests.get(page_link)
        tree = fromstring(reviews_html.text)
        review_text_tags = tree.xpath(xpath_1)
        for r_text in review_text_tags:
            review_text = r_text.text
            az_text.append(review_text)
    az_reviews = {}
    az_reviews[item] = az_text
    with open('data.json', 'w') as outfile:
        json.dump(az_reviews, outfile)
There might be a better way to create a JSON file with the first key equal to the item number and the second key equal to the list of reviews for that item; however, I am currently stuck at opening the JSON file to see which items have already been scraped.
The structure of the JSON file looks like this:
{
    "asin": "0439785960",
    "reviews": [
        "Don’t miss this one!",
        "Came in great condition, one of my favorites in the HP series!",
        "Don’t know how these books are so good and I’ve never read them until now. Whether you’ve watched the movies or not, read these books"
    ]
}
The unsuccessful attempt that seems closest to a solution is the following:
import json
from pprint import pprint

json_data = open('data.json', 'r').read()
json1_file = json.loads(json_data)
print(type(json1_file))
print(json1_file["asin"])
It returns a string that exactly replicates the output of the print() function I used during scraping to check what the JSON file would look like, but I can't access the asins or reviews using json1_file["asin"] or json1_file["reviews"], since what was read is a string and not a dictionary:
TypeError: string indices must be integers
Using the json.load() function I still print the right content, but I cannot figure out how to access the dictionary-like object from the JSON file in order to iterate through its keys and values.
The following code prints the content of the file but raises an error (AttributeError: '_io.TextIOWrapper' object has no attribute 'items') when I try to iterate through keys and values:
with open('data.json', 'r') as content:
    print(json.load(content))
    for key, value in content.items():
        print(key, value)
What is wrong with the code above, and what should be adjusted to load the file into a dictionary?
string indices must be integers
You're writing out the data as a string, not a dictionary. Remove the json.dumps() call and only use json.dump():
with open('data.json', 'w') as outfile:
    json.dump(az_reviews, outfile, indent=2, ensure_ascii=False)
what should be adjusted to load the file into a dictionary?
Once you're parsing a JSON object rather than a string, nothing, except perhaps not using read() followed by json.loads(); use only json.load().
Another problem seems to be that you're overwriting the file on every loop iteration.
Instead, you probably want to collect everything first, then open the file once and write to it afterwards:
data = {}
for item in range(items_to_scrape):
    pass  # add to data

# put all data in one file
with open('data.json', 'w') as f:
    json.dump(data, f)
In this scenario, I suggest that you store the asin as a key, with the reviews as values:
asin = "123456"  # some scraped value
data[asin] = reviews
Or write a unique file for each scrape, which you then have to loop over to read them all:
for item in range(items_to_scrape):
    data = {}
    # add to data
    with open('data{}.json'.format(item), 'w') as f:
        json.dump(data, f)
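To come back to the original goal: once the file holds a real JSON object, json.load() gives you a plain dict that you can iterate. A self-contained sketch (the record below is hypothetical data mirroring the structure shown in the question):

```python
import json

# Hypothetical data mirroring the structure shown in the question.
record = {
    "asin": "0439785960",
    "reviews": ["Don't miss this one!", "Came in great condition"],
}
with open('data.json', 'w') as f:
    json.dump(record, f, indent=2)

with open('data.json') as f:
    data = json.load(f)  # a dict, not a string

for key, value in data.items():  # iterate the dict, not the file handle
    print(key, value)
```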
