I am working on a simple GUI with PyQt and trying to update my database, without any luck so far. I am pretty new to Qt, databases and, well, also Python ;) so I don't understand what's wrong with my code; maybe someone could help me a little bit.
While working with Python (3) and sqlite3 I could update my db like this:
cur.execute('''UPDATE news SET Raw=? WHERE id=?''', (raw, news_id))
But with pyqt it's only working like this:
query = QSqlQuery()
query.exec_("UPDATE news SET Raw = 'test' WHERE Id = 9")
I tried:
query.exec_("UPDATE news SET Raw = ? WHERE Id = ?", (raw, news_id))
What lead to this error:
TypeError: arguments did not match any overloaded call:
QSqlQuery.exec_(str): too many arguments
QSqlQuery.exec_(): too many arguments
In the book Rapid GUI Programming with Python and Qt, the author does his queries like this (not exactly like that, but I tried to adapt it):
query.exec_('''UPDATE news SET Raw={0} WHERE id={1}'''.format(raw, news_id))
It doesn't seem to do anything.
and (doesn't seem to do anything either):
raw = 'test'
query.prepare("UPDATE news SET (Raw) VALUES (:raw) WHERE Id = 9 ")
query.bindValue(":raw", raw)
#query.bindValue(":news_id", 9)
query.exec_()
I also tried some other things I found here and elsewhere, but without any luck so far.
It seems that you have not yet tried the proper syntax, which should be the following (in Python 3 there is no QString, so a plain str is passed to prepare, and no trailing semicolons are needed):
query.prepare("UPDATE news SET Raw = :value WHERE id = :id")
query.bindValue(":value", raw)
query.bindValue(":id", news_id)
At my work, I have two SQL tables: one is called jobs, with string attributes job and code; the other is called skills, with string attributes code and skills.
job  code
---  -----------
j1   s0001,s0003
j2   s0002,20003
j3   s0003,s0004
code   skills
-----  -----------------------------
s0001  python programming language
s0002  oracle java
s0003  structured query language sql
s0004  microsoft excel
What my boss wants me to do is: take the values of the code attribute in jobs, split each string into an array, join that array against code in the skills table, and return the result in a job/skills format like:
job  skills
---  ---------------------------------------------------------
j1   python programming language,structured query language sql
At this point, I'd just like to know if (A) this is possible and (B) if there is a preferred alternative to this approach. I've listed my Python solution, using dictionaries, below to illustrate the concept:
jobs = {'j1': 's0001,s0003',
        'j2': 's0002,20003',
        'j3': 's0003,s0004'}

skills = {'s0001': 'python programming language',
          's0002': 'oracle java',
          's0003': 'structured query language sql',
          's0004': 'microsoft excel'}

job_skills = {k: [] for k in jobs.keys()}

for j, s in jobs.items():
    for code, skill in skills.items():
        for i in s.split(','):
            if i == code:
                job_skills[j].append(skill)

for k, v in job_skills.items():
    job_skills[k] = ','.join(v)
And the output:
{'j1': 'python programming language,structured query language sql',
'j2': 'oracle java',
'j3': 'structured query language sql,microsoft excel'}
The real crux of this problem is that there aren't just 4 different skills in our data. Our company's data includes ~5000 skills. My boss would greatly like to avoid creating a table with 5000 attributes, 1 for each skill; he believes the above approach will result in simpler queries, with potentially better memory management.
I'm still pretty new to SQL, and I technically only use SQLite3 anyway, so the best I can probably offer is a Python solution. I'll tell you how I would solve it, and hopefully someone can come along and improve on it, because doing things purely in SQL is usually much faster than going through Python.
I'm going to assume that this is SQLite, because you tagged Python. If it's not, there are probably ways to convert the database to the .db format if you prefer this solution.
I'm assuming that conn is your connection to the database, conn = sqlite3.connect(your_database_path), or a cursor for it. I don't use cursors, but it's almost certainly better practice to use them.
First, I would fetch the 'skills' table and convert it to a dict. I would do so with:
skills_array = conn.execute("""SELECT * FROM skills""")
skills_dict = dict()

# Replace i with something else; I just used it so that I could use 'skill' as a variable.
for i in skills_array:
    # skills_array is an iterator of tuples: the first position is the code,
    # and the second position is the skill itself.
    code = i[0]
    skill = i[1]
    skills_dict[code] = skill
There are probably better ways to do this; if it's important, I recommend researching them, but if it's a one-time thing this will work just fine. All this is doing is giving you an easy way to look up a skill given a code. You could do this dozens of ways. You didn't mention it being a particularly large database, so this should be fine.
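Incidentally, since the skills table has exactly two columns, one shorter option (just a sketch, using the same conn as above) is to feed the cursor straight into dict():
# Each row comes back as a (code, skills) tuple, so dict() builds the lookup directly.
skills_dict = dict(conn.execute("""SELECT code, skills FROM skills"""))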
Before the next part, something should be mentioned about SQLite. It has very limited table-modifying mechanics -- something I coincidentally found out about today. The recommended method is just to create a new table instead of trying to finagle with an old one. But there are easy ways to modify tables with SQLiteBrowser -- something I highly recommend you use. At the very least it's much easier for me to view info in it, and it's available on all the important OSes.
Second, we need to combine the jobs table and the skills dict. There are much better ways to go about it, but I chose the easy approach: splitting the comma-delimited codes in each jobs row and going from there. I'll also create a new table and insert directly into it.
conn.execute("""CREATE TABLE combined (job TEXT PRIMARY KEY, skills text)""")
conn.commit()
job_array = conn.execute("""SELECT * FROM jobs""")
for i in job_array:
job = i[0]
skill = i[1]
for code in skill.split(","):
skill.replace(code, skills_dict[code])
conn.execute("""INSERT INTO combined VALUES (?, ?)""", (job, skill,))
conn.commit()
And to combine it all...
import sqlite3

conn = sqlite3.connect(your_database_path)

skills_array = conn.execute("""SELECT * FROM skills""")
skills_dict = dict()

# Replace i with something else; I just used it so that I could use 'skill' as a variable.
for i in skills_array:
    # skills_array is an iterator of tuples: the first position is the code,
    # and the second position is the skill itself.
    code = i[0]
    skill = i[1]
    skills_dict[code] = skill

conn.execute("""CREATE TABLE combined (job TEXT PRIMARY KEY, skills TEXT)""")
conn.commit()

job_array = conn.execute("""SELECT * FROM jobs""")

for i in job_array:
    job = i[0]
    skill = i[1]
    for code in skill.split(","):
        # str.replace returns a new string, so the result has to be assigned back.
        skill = skill.replace(code, skills_dict[code])
    conn.execute("""INSERT INTO combined VALUES (?, ?)""", (job, skill))
conn.commit()
To explain a little further, in case you or someone else is confused by the job_array for loop:
Splitting skills allows you to see each individual code, meaning that all you have to do is replace every instance of the code being looked up with the corresponding skill.
And that's it. There may be a mistake or two in the above code, so I would back up your database/tables before trying it, but this should work. One thing that you might find helpful is context managers, which would make it far more Pythonic. If you plan to use this consistently (for some strange reason), refactoring for speed and readability may also be prudent.
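For example (just a sketch, reusing the hypothetical your_database_path from above), wrapping the connection in context managers commits on success, rolls back on an exception, and closes the connection when done:
import sqlite3
from contextlib import closing

# closing() guarantees the connection is closed; the inner "with conn"
# commits on success and rolls back if an exception is raised.
with closing(sqlite3.connect(your_database_path)) as conn:
    with conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS combined (job TEXT PRIMARY KEY, skills TEXT)""")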
I would also like to believe that there's an SQLite only approach, since this is exactly what databases are made for.
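One way that could look, as a sketch only: it assumes you are free to add a normalized junction table (called job_codes here) with one row per (job, code) pair, after which a single JOIN with GROUP_CONCAT produces the combined result in pure SQL:
import sqlite3

conn = sqlite3.connect(your_database_path)

# Hypothetical junction table: one row per (job, code) pair.
conn.execute("""CREATE TABLE IF NOT EXISTS job_codes (job TEXT, code TEXT)""")
for job, codes in conn.execute("""SELECT job, code FROM jobs""").fetchall():
    conn.executemany("""INSERT INTO job_codes VALUES (?, ?)""",
                     [(job, c) for c in codes.split(",")])
conn.commit()

# From here the combination is a single query.
rows = conn.execute("""
    SELECT jc.job, GROUP_CONCAT(s.skills, ',') AS skills
    FROM job_codes AS jc
    JOIN skills AS s ON s.code = jc.code
    GROUP BY jc.job
""").fetchall()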
Hope this helps. If it did, let me know. :>
P.S. If you're confused by something/want more explanation feel free to comment.
Using:
Python3
SQLite
TKinter
I am currently trying to create a function to search for a keyword in a database, but as soon as I try to combine it with TKinter, it seems to fail.
Here are the relevant lines:
(I tried it in a lot of different ways; the 3 lines below seem to work with variables, but not with the input from TKinter, so I thought they might actually work if I edit them a little.)
The problem is that I'm not very experienced with TKinter and SQLite yet; I've only worked with the two of them for about 3 days.
def searcher(column):
    # Getting input from user (TKinter)
    keyword = tk.Entry(self)
    keyword.pack()
    # Assigning the input to a variable
    kword = keyword.get()
    c.execute("SELECT * FROM my_lib WHERE {kappa} LIKE {%goal%}".format(kappa=column, goal=kword))
    #c.execute("SELECT * FROM my_lib WHERE "+column+"=?", (kword,))
    #c.execute("SELECT * FROM my_lib WHERE {} LIKE '%kword%'".format(column))
I want to check if any of the data CONTAINS the keyword, so basically:
k_word in column_data
and not
column_data == k_word
My question is:
Is there a way to take the user input (from TKinter), search the database (SQLite), and check whether any data in the database contains the keyword?
The sqlite3 docs explain that you can use ? as a placeholder in the query string, which lets you substitute in a tuple of values. They also advise against ever assembling a full query out of variables with Python's string operations (explained below). Note that placeholders only work for values, not for identifiers such as column names, so the column name still has to go into the string another way - ideally after checking it against a list of known columns:
c.execute("SELECT * FROM my_lib WHERE {} LIKE ?".format(column), ('%'+kword+'%',))
You can see above that I concatenated the % with kword, which then gets bound to the ? placeholder. Because the value is bound rather than formatted into the string, this also protects against SQL injection attacks.
Docs: https://docs.python.org/2/library/sqlite3.html
Related posts: Escaping chars in Python and sqlite
Try:
kword = "%"+kword+"%"
c.execute("SELECT * FROM my_lib WHERE kappa LIKE '%s'" % kword)
After trying over and over again, I got the actual solution. It seems like I can't just add the '%' to the variable like a string, but rather:
c.execute("SELECT * FROM my_lib WHERE {} LIKE '%{}%'".format(column, kword))
I am having an issue figuring out how to start a query on the OpportunityFieldHistory from Salesforce.
The code I usually use works fine for querying Opportunities or Leads, but I do not know how it should be written for the FieldHistory.
When I want to query the opportunity or Lead I use the following:
oppty1 = sf.opportunity.get('00658000002vFo3')
lead1 = sf.lead.get('00658000002vFo3')
and then do the proper query code with the access codes...
The problem arises when I want to do the analysis on the OpportunityFieldHistory, I tried the following:
opptyhist = sf.opportunityfieldhistory.get('xxx')
Guess what, it does not work. Do you have any clue about what I should write between sf. and .get?
Thanks in advance
Looking at the simple-salesforce API, it appears that the get method accepts an ID which you are passing correctly. However, a quick search in the Salesforce API reference seems to indicate that the OpportunityFieldHistory may need to be obtained by another function such as get_by_custom_id(self, custom_id_field, custom_id).
(OpportunityFieldHistory){
Id = None
CreatedDate = 2012-08-27 12:00:03
Field = "StageName"
NewValue = "3.0 - Presentation & Demo"
OldValue = "2.0 - Qualification & Discovery"
OpportunityId = "0067000000RFCDkAAP"
},
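Another route that may be worth trying (a sketch only; it uses simple_salesforce's query() method with SOQL, the credentials are placeholders, and the opportunity ID is taken from the record above) is to query the history rows by their parent OpportunityId rather than fetching a single record by Id:
from simple_salesforce import Salesforce

# Hypothetical credentials, for illustration only.
sf = Salesforce(username="user@example.com", password="...", security_token="...")

soql = ("SELECT Id, Field, OldValue, NewValue, CreatedDate "
        "FROM OpportunityFieldHistory "
        "WHERE OpportunityId = '0067000000RFCDkAAP'")

result = sf.query(soql)
for record in result["records"]:
    print(record["Field"], record["OldValue"], "->", record["NewValue"])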
I have created a database to store NGS sequencing results. It consists of 17 tables to store all of the information. The results are stored in spreadsheets which I parse and store in variables using Python (2.7), and then I use the Python package MySQLdb to insert the data into the database. I mainly use functions to obtain the information I need in variables, then write a loop in which I call such a function followed by a 'try:' statement to insert. Here is a simple example:
def sample_processor(file):
    my_file = open(file, 'r+')
    samples = []
    for line in my_file:
        # ...get info...
        samples.append(line[0])
    return samples

samples = sample_processor('path/to/file')

for sample in samples:
    try:
        samsql = "INSERT IGNORE INTO sample(sample_id, screening) VALUES ("
        samsql = samsql + "'" + sample + "','" + sam_screen_dict.get(sample) + "')"
        cursor.execute(samsql)
        db.commit()
    except Exception as e:
        db.rollback()
        print("Something went wrong inserting data into the sample table: %s" % (e))
*sam_screen_dict is a dictionary I made from another function.
This is a simple table that I upload into, but many of the others call on different dictionaries to make sure the correct results are uploaded. However, I was wondering whether there would be a more robust way to do this using a class.
For example, my sample_id has an associated screening attribute in the sample table, so this is easy to do with one dictionary. I have more complex junction tables, such as the table in which the sample_id, experiment_id and found mutation are stored alongside other data. Would it be a good idea to create a class for this table, inheriting from a simple 'sample' class? That way I would always know that the results being inserted are for the correct sample/experiment, etc.
Also, using classes, could I write rules for each attribute so that if the source spreadsheet is for some reason incorrect, the row will not be inserted into the database?
I.e.: sample_id is in the format A123/16, so the class would check that the first character is 'A' and that sample_id[-3] is always '/'. I know I could write these checks into functions, but I feel it would take up so much space and time to write so many 'if' statements that storing the rule once in a class would be a lot better.
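To make the idea concrete, here is a rough sketch of what I have in mind (the class and method names are made up, and the only checks shown are the A123/16 rule from above):
class Sample(object):
    def __init__(self, sample_id, screening):
        # Refuse to build an object for a badly formatted ID.
        if not (sample_id.startswith('A') and sample_id[-3] == '/'):
            raise ValueError("badly formatted sample_id: %s" % sample_id)
        self.sample_id = sample_id
        self.screening = screening

    def insert_sql(self):
        # Parameterised insert, so the values never get pasted into the SQL string.
        sql = "INSERT IGNORE INTO sample (sample_id, screening) VALUES (%s, %s)"
        return sql, (self.sample_id, self.screening)
The loop would then just do cursor.execute(*Sample(sample, sam_screen_dict.get(sample)).insert_sql()), and a badly formatted ID raises an error before anything touches the database.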
Has anybody done anything similar using classes to pass through their variables to test that they are correct before it gets to the insert stage and an error is created?
I am new to python classes and understand the basics, still trying to get to grips with them so a point in the right direction would be great - as would any help on how to go about actually writing the code for a python class that would be used to make a more robust database insertion program.
17 tables means you may end up with about 17 classes.
Use a simple script: webpy's db module,
https://github.com/webpy/webpy/blob/master/web/db.py - just modify a little of the code.
Then you can use the webpy API, http://webpy.org/docs/0.3/api#web.db, to finish your job.
Hope it's useful for you.
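For what it's worth, a minimal sketch of what that might look like (the connection details, table and column values here are made up; web.database() and db.insert() are the documented entry points):
import web

# Hypothetical connection details.
db = web.database(dbn='mysql', db='ngs_results', user='user', pw='password')

# db.insert() builds and parameterises the INSERT for you.
db.insert('sample', sample_id='A123/16', screening='yes')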
Suppose that I have a table like:
class Ticker(Entity):
    ticker = Field(String(7))
    tsdata = OneToMany('TimeSeriesData')
    staticdata = OneToMany('StaticData')
How would I query it so that it returns a set of Ticker.ticker?
I dug into the docs and it seems like select() is the way to go. However, I am not too familiar with the SQLAlchemy syntax. Any help is appreciated.
ADDED: My ultimate goal is to have a set of current tickers such that, when a new ticker is not in the set, it will be inserted into the database. I am just learning how to create a database and SQL in general. Any thoughts are appreciated.
Thanks. :)
Not sure what you're after exactly, but to get a list with all Ticker.ticker values you would do this:
[instance.ticker for instance in Ticker.query.all()]
What you really want is probably the Elixir getting started tutorial - it's good so take a look!
UPDATE 1: Since you have a database, the best way to find out if a new potential ticker needs to be inserted or not is to query the database. This will be much faster than reading all tickers into memory and checking. To see if a value is there or not, try this:
Ticker.query.filter_by(ticker=new_ticker_value).first()
If the result is None, you don't have it yet. So, all together:
if Ticker.query.filter_by(ticker=new_ticker_value).first() is None:
    Ticker(ticker=new_ticker_value)
    session.commit()