Python PRAW API library crashes script

I have been working on a Python script intended to help moderate a subreddit using 'warnings'. However, I am unable to get it to work: when I get to the point in the script where it is supposed to use the API, the Python window just closes. I have tried simpler scripts, but none of them work either. Here is my simpler file (I used it for testing):
import praw

# Get credentials from the DEFAULT instance in praw.ini
reddit = praw.Reddit()

# Beginning of script: log in explicitly
reddit = praw.Reddit(client_id='this is where I put my client id',
                     client_secret='this is where I put my secret',
                     password='this is where I put my password',
                     user_agent='PARS (Python-based Advanced Reddit Reprimand System) by u/veryinterestingnut',
                     username='PicoModBot')
print(reddit.user.me())
And here is my praw.ini file:
# The URL prefix for regular requests.
reddit_url=https://www.reddit.com
# The URL prefix for short URLs.
short_url=https://redd.it
[DEFAULT]
client_id=my id was here
client_secret=this is where I put my secret
user_agent=PARS (Python-based Advanced Reddit Reprimand System) by u/veryinterestingnut
username=PicoModBot
password=my account pass was here
What can I do to try and fix this? Any help would be much appreciated.
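Since the console window closes before the traceback can be read, one low-effort way to see the real error is to trap it and pause; a minimal sketch, with placeholder credentials:

import traceback

import praw

try:
    reddit = praw.Reddit(client_id='my client id',
                         client_secret='my client secret',
                         password='my password',
                         user_agent='PARS by u/veryinterestingnut',
                         username='PicoModBot')
    print(reddit.user.me())
except Exception:
    # Show the full traceback instead of letting the window close silently.
    traceback.print_exc()

input("Press Enter to exit...")  # keep the console window open

Separately, the praw.ini layout above is worth checking: Python's configparser (which PRAW uses to read praw.ini) raises MissingSectionHeaderError when keys such as reddit_url appear before any section header, so moving [DEFAULT] to the top of the file may be the actual fix.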

Related

How to download a report as a CSV directly from Salesforce Lightning?

I am creating a Python script for downloading a report from Salesforce as a CSV.
My script works perfectly fine for Salesforce Classic. However, I need to get it working for Lightning Experience. I'm using the simple-salesforce Python package to access our org. For SF Classic, I enter a link structured like this: https://my-company.my.salesforce.com/my_report_id?view=d&snip&export=1&enc=UTF-8&xf=csv
The script is basically like this:
from simple_salesforce import Salesforce  # note: underscore, not hyphen
import requests
import pandas as pd
from io import StringIO

sf = Salesforce(username="my_username", password="my_password",
                security_token="my_token")

sf_org = "https://my_company.my.salesforce.com/"
report_id = "0000"  # Some report id
sf_report_loc = "{0}{1}?view=d&snip&export=1&enc=UTF-8&xf=csv".format(sf_org, report_id)

response = requests.get(sf_report_loc, headers=sf.headers, cookies={"sid": sf.session_id})
new_report = response.content.decode("utf-8")
df = pd.read_csv(StringIO(new_report))  # Save the report to a DataFrame.
Whenever I switch to Lightning, the link is invalid and I get redirected. Is there a way to make this work in Lightning?
Try the isdtp parameter. In Classic it was used to force pages to render without the sidebar or header; for example, add isdtp=vw to a random page and see what happens.
https://my_company.my.salesforce.com/00O.....?isdtp=p1&export=1&enc=UTF-8&xf=csv ?
(no idea what 'p1' is, but that's what I see in Chrome's download history as part of the report's source URL)
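For what it's worth, a minimal sketch of how that suggestion could be wired into the original script; the isdtp=p1 value is taken from the answer above, and the report id is a placeholder:

from simple_salesforce import Salesforce
import requests
import pandas as pd
from io import StringIO

sf = Salesforce(username="my_username", password="my_password",
                security_token="my_token")

sf_org = "https://my_company.my.salesforce.com/"
report_id = "0000"  # hypothetical report id
# isdtp forces the Classic rendering path, so Lightning should not
# redirect the export URL.
sf_report_loc = "{0}{1}?isdtp=p1&export=1&enc=UTF-8&xf=csv".format(sf_org, report_id)

response = requests.get(sf_report_loc, headers=sf.headers,
                        cookies={"sid": sf.session_id})
df = pd.read_csv(StringIO(response.content.decode("utf-8")))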

Logging in to website to access data using Python

I have a subscription to the site https://www.naturalgasintel.com/ for daily feeds of data that show up on their site directly as .txt files; their user login page is https://www.naturalgasintel.com/user/login/
For example, the file for today's feed is given by the link https://naturalgasintel.com/ext/resources/Data-Feed/Daily-GPI/2019/01/20190104td.txt and shows up on the site as plain text.
What I'd like to do is log in using my user_email and user_password and scrape this data into an Excel file.
When I use Twill to log me into the site and then 'point' me to the data, I use this code:
from datetime import datetime
from twill.commands import go, fv

user_email = "my_email@example.com"  # placeholder credentials
user_password = "my_password"

NOW = datetime.now().strftime("%Y-%m-%d")  # e.g. "2019-01-04"
year = NOW[0:4]
month = NOW[5:7]
day = NOW[8:10]
date = year + month + day

path = "https://naturalgasintel.com/ext/resources/Data-Feed/Daily-GPI/"
end = "td.txt"

go("http://www.naturalgasintel.com/user/login")
fv("2", "user[email]", user_email)
fv("2", "user[password]", user_password)
fv("2", "commit", "Login")

datafilelocation = path + year + "/" + month + "/" + date + end
go(datafilelocation)
However, logging in from the user login page sends me to this referrer link when I go to the data's location.
https://www.naturalgasintel.com/user/login?referer=%2Fext%2Fresources%2FData-Feed%2FDaily-GPI%2F2019%2F01%2F20190104td.txt
Rather than:
https://naturalgasintel.com/ext/resources/Data-Feed/Daily-GPI/2019/01/20190104td.txt
I've tried using modules like requests as well to log in to the site and then access this data, but whatever method I use sends me to the HTML source rather than the .txt data location itself.
I've posted my complete walk-through with the Python 2.7 module Twill, to which I attached a bounty, here:
Using Twill to grab .txt from login page Python
What would the best solution to being able to access these password protected files be?
If you have a compatible version of Firefox, get the plugin javascript 0.0.1 by Chee and add the following to run on the page:
document.getElementById('user_email').value = "E-What";
document.getElementById('user_password').value = " ABC Password ";
Change the email and password as you like. It will load the page, and after that it will put in your username and password.
There are other ways to do this all by yourself with your own stand-alone process. You do not have to download other people's programs and learn them (beyond this little thing) if you do it this way.
I would have upvoted this question.
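As a stand-alone alternative to a browser plugin, a requests.Session can submit the same form fields the Twill snippet fills in and then reuse the login cookie; a minimal sketch, assuming the form posts back to /user/login and carries no hidden CSRF token:

import requests

user_email = "my_email@example.com"  # placeholder credentials
user_password = "my_password"

session = requests.Session()

# Field names taken from the Twill snippet above.
session.post("https://www.naturalgasintel.com/user/login",
             data={"user[email]": user_email,
                   "user[password]": user_password,
                   "commit": "Login"})

# The session now carries the login cookie, so the .txt URL should return
# the raw feed instead of redirecting back to the login page.
data_url = ("https://naturalgasintel.com/ext/resources/Data-Feed/"
            "Daily-GPI/2019/01/20190104td.txt")
print(session.get(data_url).text)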

Python Proxy Settings

I was using the wikipedia module, which lets you fetch the information present about a topic on Wikipedia. When I run the code it is unable to connect because of a proxy. When I connect the PC to a proxy-free network it works. The same thing also happened while using the Beautiful Soup module for scraping. I have tried to set an environment variable like http://username:password@proxy_url:port, but when I run the code in IDLE it does not work. Please help.
This worked for me:
import os
os.environ["HTTPS_PROXY"] = "http://user_id:pass@proxy:port"
If you don't want to store your password in the code file:
import os

pxuser = "your.corporate.domain\\your_username"
pxpass = input(f"Password for {pxuser}: ")  # prompted at run time, not stored
env_px = f"http://{pxuser}:{pxpass}@your_proxy:port"
os.environ["HTTPS_PROXY"] = env_px

How do I submit data to a web form in python?

I'm trying to automate the process of creating an account for something, let's call it X, but I can't figure out what to do.
I saw this code somewhere:
import urllib
import urllib2
import webbrowser

data = urllib.urlencode({'q': 'Python'})
url = 'http://duckduckgo.com/html/'
full_url = url + '?' + data
response = urllib2.urlopen(full_url)

with open("results.html", "w") as f:
    f.write(response.read())

webbrowser.open("results.html")
But I can't figure out how to modify it for my use.
I would highly recommend Selenium + WebDriver for this, since your question appears UI- and browser-based. You can install Selenium via 'pip install selenium' in most cases. Here are a couple of good references to get started:
- http://selenium-python.readthedocs.io/
- https://pypi.python.org/pypi/selenium
Also, if this process needs to drive the browser headlessly, look into including PhantomJS (via GhostDriver), which can be downloaded from the phantomjs.org website.
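To make that concrete, a minimal sketch of filling and submitting a signup form with Selenium, written against the Selenium 2/3 API current when PhantomJS was still supported; the URL and field names are hypothetical and need to be taken from the real page:

from selenium import webdriver

driver = webdriver.Firefox()  # or webdriver.PhantomJS() for headless
driver.get("https://example.com/signup")  # hypothetical signup page

# Fill in the form fields by their name attributes (hypothetical names).
driver.find_element_by_name("username").send_keys("my_new_account")
driver.find_element_by_name("email").send_keys("me@example.com")
driver.find_element_by_name("password").send_keys("my_password")

# Submit the form.
driver.find_element_by_name("commit").click()

driver.quit()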

Python cx_Freeze exe Blogger API authentication issue - blogger.dat and secrets in root directory but posting doesn't work - no errors

I just created a Python executable with cx_Freeze. There are multiple blog APIs involved, like Tumblr, and all in all it worked out well.
But Google's Blogger API doesn't work. I get the secrets from oauth2 and put them into the directory. Then I start my .exe file and want to work with a Google blog; it redirects me to a URL where it asks me to grant access to my Blogger data. Until now everything is okay; at this point it should create a blogger.dat file in the directory with the expiring token etc.
But it doesn't do anything. It tells me authentication was successful but doesn't create the blogger.dat file. If you manually put one in there, it recognizes it, because there is no redirect, but it doesn't post my content, and there are no errors either.
This is the directory I created with cx_Freeze: no blogger.dat is created there even if the authentication is successful; putting it in there manually makes Google recognize it, but I cannot post:
http://puu.sh/k2VK1/109e3cd348.png
This is the class that I use for authentication. I also tried replacing __doc__ with "blogger.dat":
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function

import sys

from oauth2client import client
from googleapiclient import sample_tools


# Authenticate and construct service.
def google_blogger_zugangsdaten():
    global service
    service, flags = sample_tools.init(
        sys.argv, 'blogger', 'v3', __doc__, "client_secrets.json",
        scope='https://www.googleapis.com/auth/blogger')


def google_blogger_execute(google_blogname, bild_uri_zum_posten,
                           alt_attribut_bild, title_attribut_bild,
                           google_post_title, google_post_content):
    blogs = service.blogs()
    thisusersblogs = blogs.listByUser(userId='self').execute()
    for blog in thisusersblogs['items']:
        print('The blog named \'%s\' is at: %s id is: %s'
              % (blog['name'], blog['url'], blog['id']))

    posts = service.posts()
    # blog_id = 0
    # google_blogname = "Development Blog"
    for blog in thisusersblogs['items']:
        if blog['name'] == google_blogname:
            # blog_id = blog['id']
            request = posts.insert(
                blogId=blog['id'],
                body={"title": google_post_title,
                      "content": "<img src='" + bild_uri_zum_posten
                                 + "' title='" + title_attribut_bild
                                 + "' alt='" + alt_attribut_bild
                                 + "'><br>" + google_post_content})
            request.execute()
Thank you all; this is my first post here, usually I only find solutions here. I hope I didn't miss anything.
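One guess as to what is going on: sample_tools.init stores its credential file relative to the current working directory, which for a frozen exe is not necessarily the exe's own folder. A minimal sketch of doing the OAuth flow by hand with an absolute path; this is an assumption, not a confirmed fix, and the flow mirrors what sample_tools.init does internally:

import os
import sys

import httplib2
from googleapiclient.discovery import build
from oauth2client import client, tools
from oauth2client.file import Storage

# Anchor both files next to the frozen executable instead of the cwd.
base_dir = os.path.dirname(sys.executable)
storage = Storage(os.path.join(base_dir, "blogger.dat"))

credentials = storage.get()
if credentials is None or credentials.invalid:
    flow = client.flow_from_clientsecrets(
        os.path.join(base_dir, "client_secrets.json"),
        scope='https://www.googleapis.com/auth/blogger')
    # run_flow writes the obtained token back into blogger.dat.
    credentials = tools.run_flow(flow, storage)

service = build('blogger', 'v3',
                http=credentials.authorize(httplib2.Http()))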
