How can I append a Boolean value to a text file? - python

I'm trying to add a boolean value to a text file but get this error:
Traceback (most recent call last):
File "/Users/valentinwestermann/Documents/La dieta mediterranea_dhooks.py", line 32, in <module>
f.write(variant["available"])
TypeError: write() argument must be str, not bool
Does anyone have an idea how to fix this? :)
The script is supposed to work as a restock monitor: it writes a text snapshot of product availability when the bot launches, then constantly compares against it and notifies the user about restocks.
import bs4 as bs
import urllib.request
import discord
from discord.ext import commands
from dhooks import Webhook
import requests
import json

r = requests.get("https://www.feature.com/products.json")
products = json.loads(r.text)["products"]

for product in products:
    print("============================================")
    print(product["title"])
    print(product["tags"])
    print(product["published_at"])
    print(product["created_at"])
    print(product["product_type"])
    for variant in product["variants"]:
        print(variant['title'])
        print(variant['available'], "\n")

data = ("available")
with open("stock_index.txt", "w") as f:
    for product in products:
        for variant in product["variants"]:
            if variant['available'] == True:
                f.write(product["title"])
                f.write(variant['title'])
                print(variant["available"])
        f.write("--------------------")

You can convert to a string first:
f.write(str(variant["available"]))
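For example, a minimal sketch of the fixed write loop with the conversion applied (the sample product data below is a hypothetical stand-in for the live products.json feed):

```python
# Hypothetical sample data standing in for the live products.json feed.
products = [{"title": "Hoodie", "variants": [{"title": "M", "available": True}]}]

with open("stock_index.txt", "w") as f:
    for product in products:
        for variant in product["variants"]:
            if variant["available"]:
                f.write(product["title"] + "\n")
                f.write(variant["title"] + "\n")
                # str() turns the bool into text that write() accepts.
                f.write(str(variant["available"]) + "\n")
        f.write("--------------------\n")
```

Adding a newline after each field also keeps the snapshot readable when you compare it on later runs.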

Related

JSON Attribute Error str has no attribute JSON when collecting data off of website

Hello, so I'm making a program to collect specific data off of a website, and I'm using cloudscraper so I don't get blocked. I have used cloudscraper before and it works perfectly fine. The only thing now is that I'm trying to get the gas guzzlers on Ethereum, but when I try connecting to OpenSea I get the error below. Would anyone know of a solution, or an API I can use to get gas guzzlers?
Traceback (most recent call last):
File "main.py", line 9, in <module>
data = response.json()
AttributeError: 'str' object has no attribute 'json'
Code
import cloudscraper
import json
import re
from json import loads
# Or: scraper = cloudscraper.CloudScraper() # CloudScraper inherits from requests.Session
scraper = cloudscraper.CloudScraper()
response = scraper.get("https://etherscan.io/gastracker").text
data = response.json()
b = json.loads(data)
print(b)
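The traceback itself points at the fix: .text returns a plain str, which has no .json() method, so either call .json() on the response object or pass the text to json.loads, but not both. A minimal offline sketch of the parsing step, using a hypothetical payload (note that the gastracker URL likely serves an HTML page rather than JSON, so in practice a dedicated API endpoint may be needed):

```python
import json

# Two working patterns (scraper stands in for a cloudscraper/requests session):
#   1) data = scraper.get(url).json()             # parse via the Response object
#   2) data = json.loads(scraper.get(url).text)   # or parse the text yourself
# Demonstrated here on a literal string to keep the example offline; the keys
# below are hypothetical.
raw = '{"gasPrice": {"safe": "12", "proposed": "13", "fast": "15"}}'
data = json.loads(raw)
print(data["gasPrice"]["fast"])
```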

Importing saved HTML file in Pandas as DataFrame instead of a dict

I have a table saved offline in HTML format. I wish to import it into Pandas and work on it, but pandas imports it as a dict instead of a DataFrame. Here is my code:
import pandas as pd
import html5lib
option_table = pd.read_html("C:/Users/home-pc/Desktop/operator.html")
print(option_table[['Circle name', 'Code']])
Here is the HTML table which I have saved offline on my computer.
The error I get when I run my code is:
Traceback (most recent call last):
File "C:\Users\home-pc\Desktop\offline.py", line 5, in <module>
print(option_table[['Circle name', 'Code']])
TypeError: list indices must be integers or slices, not list
How can I import my offline HTML page as a DataFrame instead of a dict?
Looks like it's getting imported as a one-element list. Indexing that first element and then casting it as a DataFrame worked for me:
import pandas as pd
import html5lib
option_table = pd.read_html("https://sinuateainudog.htmlpasta.com/")
option_table_df = pd.DataFrame(option_table[0])
print(option_table_df[['Circle name', 'Code']])
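The same behaviour can be reproduced offline: pd.read_html always returns a list of DataFrames, one per &lt;table&gt; element it finds, which is why indexing the list with a list of column names failed. The tiny HTML string below is a hypothetical stand-in for the saved operator.html:

```python
import io
import pandas as pd

# A minimal stand-in for the saved operator.html file.
html = """<table>
  <tr><th>Circle name</th><th>Code</th></tr>
  <tr><td>Delhi</td><td>11</td></tr>
</table>"""

# read_html returns a LIST of DataFrames, one per <table> in the document.
tables = pd.read_html(io.StringIO(html))
df = tables[0]  # already a DataFrame; no extra cast needed
print(df[['Circle name', 'Code']])
```

Since each list element is already a DataFrame, the pd.DataFrame(...) cast in the answer above is redundant (though harmless).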

delete image using msg_id in python?

I have a camera whose pictures can be viewed via the camera's IP address in a web browser.
I can fetch the image links and then download the images to my local system.
After that, I have to delete each image using msg_id, token, and param parameters.
I pass the image link as the param when deleting by msg_id.
from time import sleep
import os
import sys
import requests
from bs4 import BeautifulSoup
import piexif
from fractions import Fraction

archive_url = "http://192.168.42.1/SD/AMBA/191123000/"

def get_img_links():
    # create response object
    r = requests.get(archive_url)
    # create beautiful-soup object
    soup = BeautifulSoup(r.content, 'html5lib')
    # find all links on the page
    links = soup.findAll('a')
    # keep only the links ending with .JPG
    img_links = [archive_url + link['href'] for link in links if link['href'].endswith('JPG')]
    return img_links

def FileDelete():
    FilesToProcess = get_img_links()
    print(FilesToProcess)
    FilesToProcessStr = "\n".join(FilesToProcess)
    for FileTP in FilesToProcess:
        tosend = '{"msg_id":1281,"token":%s,"param":"%s"}' % (token, FileTP)
        print("Delete successfully")
Getting this error:
NameError: name 'token' is not defined
runfile('D:/EdallSystem/socket_pro/pic/hy/support.py', wdir='D:/EdallSystem/socket_pro/pic/hy')
['http://192.168.42.1/SD/AMBA/191123000/13063800.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13064200.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13064600.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13065000.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13065400.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13065800.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13072700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13073100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13073500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13073900.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13074300.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13074700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13075100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13075500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13075900.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13080300.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13080700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13081100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13081500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13081900.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13082300.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13082700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13083100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13083500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13083900.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13084300.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13084700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13085100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13085500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13085900.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13090300.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13090700.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13091100.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13091500.JPG', 'http://192.168.42.1/SD/AMBA/191123000/13091900.JPG']
Traceback (most recent call last):
File "D:\EdallSystem\socket_pro\pic\hy\support.py", line 82, in <module>
FileDelete()
File "D:\EdallSystem\socket_pro\pic\hy\support.py", line 74, in FileDelete
tosend = '{"msg_id":1281,"token":%s,"param":"%s"}' %( FileTP)
TypeError: not enough arguments for format string
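No fix was posted, but the NameError simply means token is never assigned anywhere in the script, and the second traceback shows that dropping it from the %-format arguments only trades one error for another (two placeholders, one argument). A minimal sketch of building the payload with a hypothetical placeholder token (the real value would come from the camera's login/session API):

```python
# `token` must be assigned before it is used in the format string.
# The value 1 here is a hypothetical placeholder, not the camera's real token.
token = 1
files_to_process = ["http://192.168.42.1/SD/AMBA/191123000/13063800.JPG"]

for file_tp in files_to_process:
    # Both %s placeholders need an argument: (token, file_tp).
    tosend = '{"msg_id":1281,"token":%s,"param":"%s"}' % (token, file_tp)
    print(tosend)
```

Note that the original FileDelete() only builds the payload and prints a message; the request that actually tells the camera to delete the file still has to be sent.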

Unable to parse json response with python

So I ran into an issue while importing API data into my code. Any help is much appreciated.
from urllib2 import Request, urlopen, URLError
import json, requests
data = requests.get('https://masari.superpools.net/api/live_stats?update=1522693430318').json()
data_parsed = json.loads(open(data,"r").read())
print data_parsed
I'm still quite new to python, and I ran into this error:
>C:\Users\bot2>python C:\Users\bot2\Desktop\Python_Code\helloworld.py
Traceback (most recent call last):
File "C:\Users\bot2\Desktop\Python_Code\helloworld.py", line 5, in <module>
data_parsed = json.loads(open(data,"r").read())
TypeError: coercing to Unicode: need string or buffer, dict found
data is already received as a json object (which is a dict in this case). Just do the following:
data = requests.get('https://masari.superpools.net/api/live_stats?update=1522693430318').json()
print data
Use data['network'] for example to access nested dictionaries.
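To illustrate the nested access (the keys and values below are hypothetical stand-ins for the live API response, simulated offline; response.json() does the equivalent of the json.loads call here):

```python
import json

# requests' Response.json() already parses the body into a dict -- there is
# nothing left to open() or json.loads() a second time.
body = '{"network": {"difficulty": 12345, "height": 67890}}'  # hypothetical payload
data = json.loads(body)           # what response.json() does for you
print(data["network"]["height"])  # nested dicts are indexed key by key
```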

grabbing the api dictionary json with python

I was following this tutorial on api grabbing with python:
https://www.youtube.com/watch?v=pxofwuWTs7c
The url gives:
{"date":"1468500743","ticker":{"buy":"27.96","high":"28.09","last":"27.97","low":"27.69","sell":"27.97","vol":"41224179.11399996"}}
I tried to follow the video and grab the 'last' data.
import urllib2
import json
url = 'https://www.okcoin.cn/api/v1/ticker.do?symbol=ltc_cny'
json_obj=urllib2.urlopen(url)
data= json.load(json_obj)
for item in data['ticker']:print item['last']
After typing the last line python returns:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: string indices must be integers
I think you just misread the payload returned by the server. In this case the ticker key in the dictionary produced by the json module is not a list but a nested dict, so iterating over it yields string keys.
So you should do the following:
import urllib2
import json
url = 'https://www.okcoin.cn/api/v1/ticker.do?symbol=ltc_cny'
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)
print data['ticker']['last']
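For completeness, here is why the original loop raised the TypeError: iterating over a dict yields its keys, which are strings, so item['last'] tried to index a str with a str. A small sketch using the ticker payload from the question:

```python
# The ticker payload from the question.
ticker = {"buy": "27.96", "high": "28.09", "last": "27.97",
          "low": "27.69", "sell": "27.97", "vol": "41224179.11399996"}

# Iterating a dict yields its KEYS, so each `item` is a str like "buy" --
# and "buy"['last'] is what raised "string indices must be integers".
for item in ticker:
    print(type(item).__name__)

# Index the dict directly instead:
print(ticker["last"])  # -> 27.97
```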
