I have a table of keywords that looks like this:
CREATE TABLE `keywords` (
`keyword` VarChar( 48 ) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
`id` Int( 11 ) AUTO_INCREMENT NOT NULL,
`blog_posts_fulltext_count` Int( 11 ) NOT NULL,
PRIMARY KEY ( `id` ) )
I also have a table of blog posts that looks like this:
CREATE TABLE `blog_posts` (
`id` Int( 11 ) AUTO_INCREMENT NOT NULL,
`title` LongText CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NULL,
`summary` LongText CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NULL,
PRIMARY KEY ( `id` ) );
CREATE FULLTEXT INDEX `title_summary_fulltext` ON `blog_posts`( `title`, `summary` );
As you can see, I have a full-text index on the title and summary fields in blog_posts.
The following search works correctly:
select count(*) from blog_posts where match(title,summary) against ('paid');
Now I'd like to populate the field keywords.blog_posts_fulltext_count with the number of rows in blog_posts that the keyword appears in.
When I run this:
keywords = Keywords.objects.all()
for the_keyword in keywords:
    query = "select count(id) from BlogPosts where match(title,summary) against ('{0}')".format(the_keyword.keyword)
    number_of_mentions = blog_posts.objects.raw(query)
    for obj in number_of_mentions:
        a = obj
...the RawQuerySet number_of_mentions appears to return without errors, and number_of_mentions.query contains:
'select count(id) from blog_posts where match(title,summary) against ('paid')'
But when the code runs the for obj in number_of_mentions line, it throws:
raise InvalidQuery('Raw query must include the primary key')
I've also tried defining the query string as:
query = "select count('id') from BlogPosts where match(title,summary) against ('{0}')".format(the_keyword.keyword)
...and as:
query = "select count(*) from BlogPosts where match(title,summary) against ('{0}')".format(the_keyword.keyword)
...with the same error message resulting.
What is the correct way to get a result from a raw sql COUNT command in Django?
When you use blog_posts.objects.raw(), Django expects the raw query to return blog_posts model objects, but your count query returns a single number instead of a collection of objects. That is how the raw() API is documented to behave.
If you want to run a query that returns something other than model objects (like a number), you have to use the approach described in another section of that same documentation page, "Executing custom SQL directly".
The general idea is that you'll have to use a cursor (an object that iterates over a database result set) and fetch its only row. The following example should give you an idea of how to do it.
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("select count(id) from blog_posts where match(title,summary) against (%s)", [the_keyword.keyword])
    # get a single row from the result
    row = cursor.fetchone()
    # get the value in the first column of the result (the only column)
    count_value = row[0]
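To populate keywords.blog_posts_fulltext_count as the question asks, you can wrap that pattern in a loop. A minimal sketch, assuming the Keywords model and the blog_posts table names from the question:

from django.db import connection

for the_keyword in Keywords.objects.all():
    with connection.cursor() as cursor:
        # let the driver escape the keyword instead of formatting it into the string
        cursor.execute(
            "select count(id) from blog_posts where match(title,summary) against (%s)",
            [the_keyword.keyword],
        )
        the_keyword.blog_posts_fulltext_count = cursor.fetchone()[0]
    the_keyword.save()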
I have an SQL INSERT query that takes values from user input and also inserts the ID from another table as a foreign key. For this I wrote the query below, but it doesn't seem to work.
Status_type table
CREATE TABLE status_type (
ID int(5) NOT NULL,
status varchar(50) NOT NULL
);
info table
CREATE TABLE info (
    ID int(11) NOT NULL,
    name varchar(50) NULL,
    nickname varchar(50) NULL,
    mother_name varchar(50) NULL,
    birthdate date NULL,
    status_type int, -- this must be the foreign key for the status_type table
    create_date date
);
The user has a dropdown list that retrieves its values from the status_type table, so that he can select the value he wants to insert into the new record in the info table. The info table's status_type column is an int because I want to store the ID from the status_type table, not the value.
code:
query = '''
INSERT INTO info (ID,name,nickname,mother_name,birthdate,t1.status_type,created_date)
VALUES(?,?,?,?,?,?,?)
select t2.ID
from info as t1
INNER JOIN status_type as t2
ON t2.ID = t1.status_type
'''
args = (ID,name,nickname,mother_name,db,status_type,current_date)
cursor = con.cursor()
cursor.execute(query,args)
con.commit()
st.success('Record added Successfully')
The status_type field takes an INT (the ID of the value from the other table). At the moment, when the user inserts, the selected value itself is inserted. What I need is to convert this value into its corresponding ID and store the ID.
Based on the answer of @Mostafa NZ, I modified my query and it became:
query = '''
INSERT INTO info (ID,name,nickname,mother_name,birthdate,status_type,created_date)
VALUES(?,?,?,?,?,(select status_type.ID
from status_type
where status = ?),?)
'''
args = (ID,name,nickname,mother_name,db,status_type,current_date)
cursor = con.cursor()
cursor.execute(query,args)
con.commit()
st.success('Record added Successfully')
When creating a record, you can do it in one of these ways:
Receive as input from the user
Specify a default value for the field
INSERT INTO (...) VALUES (?, ?, 1, ?, ?)
Use a select in the INSERT
INSERT INTO (...) VALUES (?, ?, (SELECT TOP 1 ID FROM status_type ORDER BY ID), ?, ?)
When inserting data, you can only list the names of the destination table's own fields. t1.status_type is wrong in the following line:
INSERT INTO info (ID,name,nickname,mother_name,birthdate,t1.status_type,created_date)
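To make the third option concrete, here is a minimal sqlite3 sketch (the connection, the literal values, and the status text 'active' are only illustrative; the table and column names follow the question's schema):

import sqlite3

con = sqlite3.connect("example.db")
cur = con.cursor()
# the inner SELECT converts the status text chosen in the dropdown into its ID
cur.execute(
    """
    INSERT INTO info (ID, name, nickname, mother_name, birthdate, status_type, create_date)
    VALUES (?, ?, ?, ?, ?, (SELECT ID FROM status_type WHERE status = ?), ?)
    """,
    (1, "John", "Jo", "Mary", "1990-01-01", "active", "2023-05-01"),
)
con.commit()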
I'm getting classes for the tables in the DB as follows:
import sqlalchemy as sa
import sqlalchemy.ext.automap
eng = sa.create_engine(CONNECTION_URL)
Base = sa.ext.automap.automap_base()
Base.prepare(eng, reflect=True)
Session = sa.orm.sessionmaker(bind=eng)
Table1 = Base.classes.Table1
In my case Table1 is system versioned which I understand sqlalchemy doesn't explicitly support.
When running the following code:
t = Table1(field1=1, field2=3)
with Session() as session:
    session.add(t)
    session.commit()
I get the following error:
[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert explicit value into a GENERATED ALWAYS column in table 'DBName.dbo.Table1'. Use INSERT with a column list to exclude the GENERATED ALWAYS column, or insert a DEFAULT into GENERATED ALWAYS column. (13536) (SQLExecDirectW);
I understand this probably has to do with the ValidTo and ValidFrom columns
Table1.__table__.columns.keys()
# Column('ValidFrom', DATETIME2(), table=<Table1>, nullable=False)
# Column('ValidTo', DATETIME2(), table=<Table1>, nullable=False)
How do I tell sqlalchemy to ignore those columns during the insert statement?
EDIT
I'm guessing the below is the relevant part of the create statement?
CREATE TABLE [dbo].[Table1] (
    [TableID] [int] NOT NULL IDENTITY,
    ...
    [ValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
    [ValidTo] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL
)
I've got the code below working using SQLAlchemy.
CREATE TABLE dbo.Customer2
(
Id INT NOT NULL PRIMARY KEY CLUSTERED,
Name NVARCHAR(100) NOT NULL,
StartTime DATETIME2 GENERATED ALWAYS AS ROW START
NOT NULL,
EndTime DATETIME2 GENERATED ALWAYS AS ROW END
NOT NULL ,
PERIOD FOR SYSTEM_TIME (StartTime, EndTime)
)
WITH(SYSTEM_VERSIONING=ON (HISTORY_TABLE=dbo.CustomerHistory2))
If the StartTime / EndTime columns were HIDDEN (which these aren't), a value wouldn't be needed in the insert statement and you could supply just the required columns. However, the period columns in my table are not hidden, so I pass default for them.
sql = "INSERT INTO dbo.Customer2 VALUES (2,'Someone else', default,default)"
print(sql)
with engine.connect() as con:
rs = con.execute(sql)
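If you would rather stay in SQLAlchemy than write raw SQL, one option is a Core insert with an explicit column list, so the GENERATED ALWAYS columns are never mentioned in the statement. A sketch, not verified against a system-versioned table, reusing eng and Table1 from the question:

# insert().values() compiles with only the named columns, so
# ValidFrom/ValidTo are omitted and SQL Server fills them in
stmt = Table1.__table__.insert().values(field1=1, field2=3)
with eng.begin() as con:
    con.execute(stmt)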
Extension from previous question
I'm attempting to insert values into a database after pulling them from an XML file, but none of them appear in the database after the INSERT statements embedded in the Python code run. Without the SQL section included, the entries are printed as expected. I'm not getting an error in my Python environment (Anaconda Navigator), so I'm totally lost: the queries seem to have been processed, but nothing was entered. I tried a basic SELECT statement to display the table, but I get an empty table back.
Select Query
%sql SELECT * FROM publication;
Main Python code
import sqlite3
con = sqlite3.connect("publications.db")
cur = con.cursor()
from xml.dom import minidom
xmldoc = minidom.parse("test.xml")
#loop through <pub> tags to find number of pubs to grab
root = xmldoc.getElementsByTagName("root")[0]
pubs = [a.firstChild.data for a in root.getElementsByTagName("pub")]
num_pubs = len(pubs)
count = 0
while(count < num_pubs):
    #get data from each <pub> tag
    temp_pub = root.getElementsByTagName("pub")[count]
    temp_ID = temp_pub.getElementsByTagName("ID")[0].firstChild.data
    temp_title = temp_pub.getElementsByTagName("title")[0].firstChild.data
    temp_year = temp_pub.getElementsByTagName("year")[0].firstChild.data
    temp_booktitle = temp_pub.getElementsByTagName("booktitle")[0].firstChild.data
    temp_pages = temp_pub.getElementsByTagName("pages")[0].firstChild.data
    temp_authors = temp_pub.getElementsByTagName("authors")[0]
    temp_author_array = [a.firstChild.data for a in temp_authors.getElementsByTagName("author")]
    num_authors = len(temp_author_array)
    count = count + 1
    #process results into sqlite
    pub_params = (temp_ID, temp_title)
    cur.execute("INSERT INTO publication (id, ptitle) VALUES (?, ?)", pub_params)
    journal_params = (temp_booktitle, temp_pages, temp_year)
    cur.execute("INSERT INTO journal (jtitle, pages, year) VALUES (?, ?, ?)", journal_params)
    x = 0
    while(x < num_authors):
        cur.execute("INSERT OR IGNORE INTO authors (name) VALUES (?)", (temp_author_array[x],))
        x = x + 1
    #display results
    print("\nEntry processed: ", count)
    print("------------------\nPublication ID: ", temp_ID)
    print("Publication Title: ", temp_title)
    print("Year: ", temp_year)
    print("Journal title: ", temp_booktitle)
    print("Pages: ", temp_pages)
    i = 0
    print("Authors: ")
    while(i < num_authors):
        print("-", temp_author_array[i])
        i = i + 1
print("\nNumber of entries processed: ", count)
SQL queries
%%sql
DROP TABLE IF EXISTS publication;
CREATE TABLE publication(
id INT PRIMARY KEY NOT NULL,
ptitle VARCHAR NOT NULL
);
/* Author Entity set and writes_for relationship */
DROP TABLE IF EXISTS authors;
CREATE TABLE authors(
name VARCHAR(200) PRIMARY KEY NOT NULL,
pub_id INT,
pub_title VARCHAR(200),
FOREIGN KEY(pub_id, pub_title) REFERENCES publication(id, ptitle)
);
/* Journal Entity set and apart_of relationship */
DROP TABLE IF EXISTS journal;
CREATE TABLE journal(
jtitle VARCHAR(200) PRIMARY KEY NOT NULL,
pages INT,
year INT(4),
pub_id INT,
pub_title VARCHAR(200),
FOREIGN KEY(pub_id, pub_title) REFERENCES publication(id, ptitle)
);
/* Wrote relationship b/w journal & authors */
DROP TABLE IF EXISTS wrote;
CREATE TABLE wrote(
name VARCHAR(100) NOT NULL,
jtitle VARCHAR(50) NOT NULL,
PRIMARY KEY(name, jtitle),
FOREIGN KEY(name) REFERENCES authors(name),
FOREIGN KEY(jtitle) REFERENCES journal(jtitle)
);
You need to call con.commit() in order to commit the data to the database. If you use the connection as a context manager (with con:), the connection will commit any changes you make (or roll them back if there is an error).
Explicitly closing the connection is also a good practice.
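For example, a minimal sketch of that pattern with the names from the question:

import sqlite3

con = sqlite3.connect("publications.db")
cur = con.cursor()
with con:  # commits on success, rolls back on an exception
    cur.execute("INSERT INTO publication (id, ptitle) VALUES (?, ?)", (1, "A title"))
con.close()  # the context manager commits, but does not close the connection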
It looks like you are forgetting to commit and close the connection. You need to call these two methods in order to save the work you have done to the database and to properly close the connection:
conn.commit()
conn.close()
[Using Python3.x]
The basic idea is that I have to run a first query to pull a long list of IDs (text, about a million of them) and use those IDs in an IN() clause of a WHERE statement in another query. I'm using Python string formatting to make this happen, and it works well if the number of IDs is small, say 100k, but gives me an error (pyodbc.Error: ('08S01', '[08S01] [MySQL][ODBC 5.2(a) Driver][mysqld-5.5.31-MariaDB-log]MySQL server has gone away (2006) (SQLExecDirectW)')) when the set is indeed about a million IDs long.
I tried to read into it a bit and think it might have something to do with the default(?) limits set by SQLite. Also, I am wondering if I'm approaching this in the right way at all.
Here's my code:
Step 1: Getting the IDs
def get_device_ids(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT PRIMARY KEY, \
        lang TEXT, first_date DATETIME);".format(tb_name))
    data = create_external_con(con_str, query)
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.update([row[0]])
            local_cur.execute("INSERT INTO srv(id, lang, \
                first_date) VALUES (?,?,?);", (row))
        lid = local_cur.lastrowid
        print("Number of rows inserted into SRV: {}".format(lid))
    return device_id_set
Step 2: Generating the query with 'dynamic' IN() clause
def gen_queries(ids):
    ids_list = str(', '.join("'" + id_ + "'" for id_ in ids))
    query = """
        SELECT e.id,
               e.field2,
               e.field3
        FROM table e
        WHERE e.id IN ({})
    """.format(ids_list)
    return query
Step 3: Using that query in another INSERT query
This is where things go wrong
def get_data(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT, field1 INTEGER, \
        field2 TEXT, field3 TEXT, field4 INTEGER, \
        PRIMARY KEY(id, field1));".format(tb_name))
    data = create_external_con(con_str, query)  # <== THIS IS WHERE THAT QUERY IS INSERTED
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.update(row[1])
            local_cur.execute("INSERT INTO table2(id, field1, field2, field3, \
                field4) VALUES (?,?,?,?,?);", (row))
        lid = local_cur.lastrowid
        print("Number of rows inserted into table2: {}".format(lid))
Any help is very much appreciated!
Edit
This is probably the right solution to my problem. However, when I try to use "SET SESSION max_allowed_packet=104857600" I get the error: SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value (1621). Then when I try to change SESSION to GLOBAL, I get an access denied message.
Insert the IDs into a (temporary) table in the same database, and then use:
... WHERE e.ID IN (SELECT ID FROM TempTable)
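As a sketch, assuming a pyodbc-style connection con to the MySQL server ("table" is the placeholder table name from the question):

cur = con.cursor()
# stage the IDs once, then let the server do the membership test
cur.execute("CREATE TEMPORARY TABLE temp_ids (id VARCHAR(64) PRIMARY KEY)")
cur.executemany("INSERT INTO temp_ids (id) VALUES (?)",
                [(device_id,) for device_id in device_id_set])
cur.execute("""
    SELECT e.id, e.field2, e.field3
    FROM table e
    WHERE e.id IN (SELECT id FROM temp_ids)
""")
rows = cur.fetchall()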
I am using sqlalchemy 0.7 and MySQL server version 5.1.63.
I have the following table on my database:
CREATE TABLE `g_domains` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `name` (`name`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
The corresponding model is:
class GDomain(Base):
    __tablename__ = 'g_domains'
    __table_args__ = {
        'mysql_engine': 'InnoDB',
        'mysql_charset': 'utf8',
        'mysql_collate': 'utf8_general_ci'
    }
    id = Column(mysql.BIGINT(unsigned=True), primary_key=True)
    name = Column(mysql.VARCHAR(255, collation='utf8_general_ci'),
                  nullable=False, unique=True)
The following query in SQLAlchemy does not return all the expected rows:
session.query(GDomain).filter(GDomain.name.in_(domain_set)) \
    .limit(len(domain_set)).all()
where domain_set is a python list containing some domain names like
domain_set = ['www.google.com', 'www.yahoo.com', 'www.AMAZON.com']
Although the table has a row (1, www.amazon.com), the above query returns only
(www.google.com, www.yahoo.com).
When I run the SQL query directly:
SELECT * FROM g_domains
WHERE name IN ('www.google.com', 'www.yahoo.com', 'www.AMAZON.com')
the www.amazon.com row is returned as well.
Do you have an idea why this is happening?
Thanks in advance
What is the model_domain variable? Usually it looks like this:
session.query(GDomain).filter(GDomain.name.in_(domain_set)) \
    .limit(len(domain_set)).all()
Note that the GDomain is used in both places. Alternatively you can use aliases:
domains = orm.aliased(GDomain, name='domain')
session.query(domains).filter(domains.name.in_(domain_set))
You can always try debugging: print the query produced by SQLAlchemy (see: SQLAlchemy: print the actual query).
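For example, a quick sketch:

# build the query without executing it, then print the SQL it generates
query = session.query(GDomain).filter(GDomain.name.in_(domain_set))
print(str(query))  # shows the SELECT with bound-parameter placeholders
rows = query.limit(len(domain_set)).all()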