I need advice on a Python problem. I have a CSV file that holds field values I need to search for on a website. If the search finds any records matching a value from the CSV file, I want to save those records into a new CSV file. How can I do this, and which modules should I use? Any assistance would be greatly appreciated.
import requests
import bs4
import csv
r = requests.get('https://etrakit.friscotexas.gov/Search/permit.aspx')
with open('C:/Users/Desktop/Programming/Addresses.csv') as f:
    for row in csv.reader(f):
        print(row[1])
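A minimal sketch of the general pattern, not a working scraper for this specific site: permit.aspx is an ASP.NET form, so a real solution would likely need to submit the form fields that page expects. The 'q' query parameter and the "value appears in the page text" check below are placeholders for whatever the site actually requires.
import csv
import bs4
import requests

SEARCH_URL = 'https://etrakit.friscotexas.gov/Search/permit.aspx'

matching_rows = []
with open('C:/Users/Desktop/Programming/Addresses.csv', newline='') as f:
    for row in csv.reader(f):
        value = row[1]  # the field value to search for on the website
        # 'q' is a placeholder parameter name; the real form fields will differ
        page = requests.get(SEARCH_URL, params={'q': value})
        soup = bs4.BeautifulSoup(page.text, 'html.parser')
        if value in soup.get_text():  # crude check: did the value show up in the results?
            matching_rows.append(row)

# Save every record whose value was found into a new CSV file
with open('matches.csv', 'w', newline='') as out:
    csv.writer(out).writerows(matching_rows)
The requests, bs4 (BeautifulSoup) and csv modules are enough for this kind of task.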
I have this code to scrape search results from Google. If I have a list of terms I need to search for in Excel/CSV format, how can I write the code to:
Import the Excel/CSV file, search for each row's value, and print out the results for that row.
Repeat for the next row value in the Excel file.
Here's my code. Please help with any solution you can think of.
For example, my Excel file has just one column with three values, as below:
List to search
Defuse
Commercial
Ecommerce
from ecommercetools import seo
import csv
import pandas as pd
searching = input('What do you want to search?')
results = seo.get_serps(searching)
df = pd.DataFrame(results.head(20)) # Convert result into data frame.
df.to_csv("ScanOutput.csv",mode="a")
Thank you
I tried several modules but got stuck somewhere along the way. Any help would be appreciated.
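A minimal sketch of that loop, assuming the terms are stored one per row in a file called terms.csv (the filename is an assumption) under the "List to search" header, and that seo.get_serps returns a DataFrame as in your snippet:
import pandas as pd
from ecommercetools import seo

# Read the single-column list of search terms
terms = pd.read_csv('terms.csv')  # header row: "List to search"

for term in terms.iloc[:, 0]:
    results = seo.get_serps(term)          # search Google for this term
    df = pd.DataFrame(results).head(20)    # keep the first 20 results
    df['search_term'] = term               # remember which term produced these rows
    df.to_csv('ScanOutput.csv', mode='a')  # append to the combined output (the header repeats per append)
    print(df)
If you keep the terms in an actual Excel workbook instead of a CSV, pd.read_excel works the same way here.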
If this is the content of your .csv file called file.csv:
a,b,c
1,2,3
k,l,m
then you can read it and loop row by row like so:
import csv
# read file.csv and print each row
with open('file.csv', 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        print(row)
This answer doesn't use pandas but the simpler csv module, which I think is fine unless you have gigabytes of data or the data is very complex.
I have a csv file as follows
id,name,product
1234,Solar,"['id':'89521','status':'Active','productRatePlan':[{'id':'224416','name':'monthly charge', 'pricing': [{'currency':'USD','price': 109.13}]}]]"
I want the csv file to look like this:
id,name,product.id,product.status,productRatePlan.id,productRatePlan.name,productRatePlan.currency,productRatePlan.price
1234,Solar,89521,Active,224416,monthly charge,USD,109.13
I am fetching the data via an HTTP request. When I open the file using csv.reader(filename), each row comes back as a list of strings, and the product field is a string that looks like a list of dictionaries.
I am having trouble accessing those values and turning them into column names. The code below is what I have tried.
import csv
with open('newFile.csv') as nf:
    csvRead = csv.reader(nf)
    for row in csvRead:
        # if row == 'product':
        #     print(row)
        print(row[2][2:4])
        print(type(row))
I know the approach I am taking is wrong; I have tried many combinations. It would be very helpful if someone could guide me and point me to the resources I need to complete this. Thank you.
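The product field in your sample is not valid JSON as written (its outermost [ ] look as if they should be { }), so json.loads will reject it. Here is a sketch under that assumption, swapping the outer brackets for braces and parsing with ast.literal_eval; the file names and flattened column names are taken from your example:
import ast
import csv

fieldnames = ['id', 'name', 'product.id', 'product.status',
              'productRatePlan.id', 'productRatePlan.name',
              'productRatePlan.currency', 'productRatePlan.price']

with open('newFile.csv', newline='') as nf, open('flatFile.csv', 'w', newline='') as out:
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for row in csv.DictReader(nf):
        # Treat the outer [ ] of the product string as { } so it parses as a dict
        product = ast.literal_eval('{' + row['product'].strip()[1:-1] + '}')
        plan = product['productRatePlan'][0]   # first rate plan
        price = plan['pricing'][0]             # first pricing entry
        writer.writerow({
            'id': row['id'],
            'name': row['name'],
            'product.id': product['id'],
            'product.status': product['status'],
            'productRatePlan.id': plan['id'],
            'productRatePlan.name': plan['name'],
            'productRatePlan.currency': price['currency'],
            'productRatePlan.price': price['price'],
        })
If the real data is proper JSON (with braces), replace the ast.literal_eval line with json.loads(row['product']).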
I am trying to create a .csv file of UUID numbers. I see how to make a single UUID in Python, but I can't get the syntax right to generate 50 of them and save them to a .csv file. I've googled and found many ways to create .csv files and how to use a for loop, but none seem to pertain to this particular application. Thank you for any help.
Just combine a csv writer with a UUID generator:
import csv
import uuid
with open('uuids.csv', 'w', newline='') as csvfile:  # newline='' avoids blank rows on Windows
    uuidwriter = csv.writer(csvfile)
    for i in range(50):
        uuidwriter.writerow([uuid.uuid1()])
A csv is basically a text file, and since yours has only one column you won't need separators:
import uuid
with open('uuids.csv', 'w') as f:
    f.writelines(str(uuid.uuid1()) + "\n" for i in range(50))
import urllib
import json
import re
import csv
from bs4 import BeautifulSoup
game_code = open("/Users//Desktop/PYTHON/gc.txt").read()
game_code = game_code.split("\r")
for gc in game_code:
    htmltext = urllib.urlopen("http://cluster.leaguestat.com/feed/index.php?feed=gc&key=f109cf290fcf50d4&client_code=ohl&game_id="+gc+"&lang_code=en&fmt=json&tab=pxpverbose")
    soup = BeautifulSoup(htmltext, "html.parser")
    j = json.loads(soup.text)
    summary = ['GC'], ['Pxpverbose']
    for event in summary:
        print gc, ["event"]
I cannot seem to access the data to print the proper headers and rows. I ultimately want to export specific rows to CSV. I downloaded Python two days ago, so I am very new. I needed this one data set for a project. Any advice or direction would be greatly appreciated.
Here are a few game codes if anyone wanted to take a look. Thanks
21127,20788,20922,20752,21094,21196,21295,21159,21128,20854,21057
Here are a few thoughts:
I'd like to point out the excellent requests as an alternative to urllib for all your HTTP needs in Python (you may need to pip install requests).
requests comes with a built-in json decoder (you don't need BeautifulSoup).
In fact, you have already imported a great module (csv) to print headers and rows of data. You can also use this module to write the data to a file.
Your data is returned as a dictionary (dict) in Python, a data structure indexed by keys. You can access the values (I think this is what you mean by "specific rows") in your data with these keys.
One of many possible ways to accomplish what you want:
import requests
import csv
game_code = open("/Users//Desktop/PYTHON/gc.txt").read()
game_code = game_code.split("\r")
for gc in game_code:
    r = requests.get("http://cluster.leaguestat.com/feed/index.php?feed=gc&key=f109cf290fcf50d4&client_code=ohl&game_id="+gc+"&lang_code=en&fmt=json&tab=pxpverbose")
    data = r.json()
    with open("my_data.csv", "a") as csvfile:
        wr = csv.writer(csvfile, delimiter=',')
        for summary in data["GC"]["Pxpverbose"]:
            wr.writerow([gc, summary["event"]])
            # add keys to write additional values;
            # e.g. summary["some-key"]. Example:
            # wr.writerow([gc, summary["event"], summary["id"]])
You don't need Beautiful Soup for this; the data can be read directly from the URL and parsed as JSON.
import urllib, json
response = urllib.urlopen("http://cluster.leaguestat.com/feed/index.php?feed=gc&key=f109cf290fcf50d4&client_code=ohl&game_id=" + gc +"&lang_code=en&fmt=json&tab=pxpverbose")
data = json.loads(response.read())
At this point, data is the parsed JSON of your web page.
Excel can read CSV files, so the easiest route would be exporting the data you want into a CSV file using the csv module.
This should be enough to get you started. Modify fieldnames to include specific event details in the columns of the csv file.
import csv
with open('my_games.csv', 'w') as csvfile:
    fieldnames = ['event', 'id']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames,
                            extrasaction='ignore')
    writer.writeheader()
    for event in data['GC']['Pxpverbose']:
        writer.writerow(event)
I am a beginner at Python and I'm looking to take 3 specific columns, starting at a certain row, from a .csv spreadsheet and then import each into Python.
For example
I would need to take 1000 rows worth of data from column F starting at
row 12.
I've looked at options using csv and pandas but I can't figure out how
to have them start importing at a certain row/column.
Any help would be greatly appreciated.
If the spreadsheet is not huge, the easiest approach is to load the entire CSV file into Python using the csv module and then extract the required rows and columns. For example:
import csv

with open('Book1.csv', newline='') as f:
    rows = list(csv.reader(f))
data = [row[5] for row in rows[11:11 + 1000]]
will do the trick. Remember that Python starts numbering from 0, so row[5] is column F from your spreadsheet and rows[11] is row 12.
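If you would rather use pandas, a sketch along these lines should also work; skiprows, nrows and usecols are standard pandas.read_csv parameters, and the numbers below assume the 1000 values really do start at spreadsheet row 12:
import pandas as pd

# Skip the first 11 rows, read 1000 rows, and keep only column F (index 5)
df = pd.read_csv('Book1.csv', skiprows=11, nrows=1000, header=None, usecols=[5])
data = df[5].tolist()
header=None tells pandas not to treat row 12 itself as a header line.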
CSV files are plain text files, so there is no way to jump straight to a certain line. You will have to read line by line and count. Have a look at the csv module in Python's standard library, which explains how to (easily) read lines, particularly the csv.reader examples.
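A small sketch of that read-and-count approach, using itertools.islice to do the skipping and stopping; the filename, the 11-row offset and column index 5 are taken from the question above:
import csv
from itertools import islice

with open('Book1.csv', newline='') as f:
    reader = csv.reader(f)
    # Skip the first 11 rows, then take the next 1000
    wanted = islice(reader, 11, 11 + 1000)
    column_f = [row[5] for row in wanted]

print(len(column_f))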