We are trying to take the values in one column of a shapefile and create a new field/column holding the translated values. We've tried a search cursor, an update cursor, a for loop, and the field calculator. Does anyone have an idea of how to accomplish this in a simple way?
Here's a sample of our script, in which we try to convert all of the items labeled "Montane_Chaparral" in the existing column to the new value "Dry_Forest" in the new column:
veg = "Cal_Veg.shp"
field = ["WHRNAME"]
therows=arcpy.UpdateCursor(veg, field)
for therow in therows:
if (therow.getValue(field)==" "):
therow.setValue("Montane_Chaparral","Dry_Forest")
therows.updateRow(therow)
print "Done."
I have also tried:
veg = "Cal_Veg.shp"
with arcpy.da.SearchCursor(veg,["WHRNAME","NewVeg"]) as SCursor:
for row in SCursor:
if row[0] == "Montane Chaparral":
with arcpy.da.InsertCursor(veg,["NewVeg"]) as ICursor:
for new_row in ICursor:
NewVeg = "Dry Forest"
ICursor.insertRow(new_row)
print "done"
I would definitely recommend the more modern arcpy.da.UpdateCursor for your script, for a variety of reasons, but mainly for speed and efficiency.
It appears you are running a nested cursor where all you need is a single update cursor. Try this:
veg = "Cal_Veg.shp"
with arcpy.da.UpdateCursor(veg,["WHRNAME","NewVeg"]) as cursor:
for row in cursor:
if row[0] == "Montane Chaparral": # row[0] references "WHRNAME"
row[1] = "Dry Forest" # row[1] references "NewVeg"
cursor.updateRow(row)
print "done"
For my project, I needed to run a raw query to increase performance. The problem is that for one part I need to change the value of the fetched data to something else (a Gregorian date to a Jalali date), and this decreases my performance a lot.
cursor.execute("select date,number from my_db)
while True:
row = cursor.fetchone()
if row is None:
break
data.append(row)
This section takes about 1 minute for 4 million rows, but I need to change the date like this:
cursor.execute("select date,number from my_db)
while True:
row = cursor.fetchone()
if row is None:
break
row = list(row)
row[0] = (jdatetime.datetime.fromgregorian(datetime=row[0]).strftime( '%y/%m/%d, %H:%m'))
data.append(row)
This causes my code to run in 7 minutes. I wonder if there is a way to do this conversion efficiently.
The first part should not take 1 minute to finish. Instead of raw SQL and a loop, do:
data = YourModel.objects.values_list('date', 'number')
For the second part, you can make your own SQL function.
https://raresql.com/tag/iranian-calendar-to-georgian-calendar-in-sql-server/
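If the conversion has to stay in Python instead, dropping the per-row fetchone() loop for a single pass over the queryset at least trims the Python overhead. A minimal sketch, assuming a Django model named MyModel with date and number fields (the model name is an assumption):

import jdatetime

# one pass, converting as we go; iterator() streams rows instead of
# caching all 4 million in the queryset
data = [
    (jdatetime.datetime.fromgregorian(datetime=d).strftime('%y/%m/%d, %H:%M'), n)
    for d, n in MyModel.objects.values_list('date', 'number').iterator()
]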
I'm trying to save a column value into a Python variable. I query my DB for an int in columns called id_mType and id_meter, depending on a value that I get from an XML file. To test it I do the following (I'm new to using databases):
m = 'R1'
id_cont1 = 'LGZ0019800712'
xdb = cursor.execute("SELECT id_mType FROM mType WHERE m_symbol = %s", m)
xdb1 = cursor.execute("SELECT id_meter FROM meter WHERE nombre = %s", id_cont1)
print (xdb)
print (xdb1)
I get the value 1 every time, whereas id_mType for 'R1' should be 3, and id_meter should be 7 for that id_cont1 value. I need these to insert into another table (where both id_meter and id_mType are foreign keys; I don't know if there is an easier way).
You can store it in a list. Is that okay?
results = cursor.fetchall()
my_list = []
for result in results:
    my_list.append(result[0])
Now my_list should hold the column values your query returns.
Use the fetchone() method to fetch a row from a cursor.
row = xdb.fetchone()
if row:
    mtype = row[0]

row = xdb1.fetchone()
if row:
    meter = row[0]
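From there, the insert into the other table the question mentions can be parameterized with both keys. A sketch only, assuming a hypothetical target table named readings and MySQL-style %s placeholders (adjust to your driver):

# "readings" is a hypothetical table with id_mType and id_meter as FKs
cursor.execute(
    "INSERT INTO readings (id_mType, id_meter) VALUES (%s, %s)",
    (mtype, meter),
)
connection.commit()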
I have a table in which each value is the result of a combination of different conditions. For example, in the image below the condition is as follows: if CoverType = Fallow, Treatment = Crop Residue Cover, Condition/ImperviousArea = Good, and SoilType = C, then the value is 83. I want a tool which asks the user to choose a value from each column (e.g. CoverType, SoilType, ...) and then returns the related number as output. Do you have any thoughts on how I should do this?
So far I have just the first lines of the code as below:
import arcpy

path = r'H:\Python\FinalProject\RunOff.gdb'
arcpy.env.workspace = path
arcpy.env.overwriteOutput = True

table = r'H:\Python\FinalProject\RunOff.gdb\CN.csv'
cursor = arcpy.SearchCursor(table)
Unless you are restricted to ArcGIS 10.0 or earlier, I suggest using the da cursors. The syntax is a little easier.
However, you're definitely on the right track. The cursor lets you loop through the table and find a row in which the three categories match the user's input.
cover_type = "Fallow"              # user input
treatment = "Crop Residue Cover"   # user input
condition = "Good"                 # user input

field_list = ["CoverType", "Treatment", "Condition", "B"]
found = False
with arcpy.da.SearchCursor(table, field_list) as cursor:
    for row in cursor:
        if row[0] == cover_type and row[1] == treatment and row[2] == condition:
            print row[3]
            found = True
            break
if not found:
    print "no matching combination"
I want to use Python to populate a database. I have to get values out of one table, calculate a score, and write this score into another table. I can't figure out how to compare the column names of a row from a result set, because I don't need all of the columns for the calculation, but I do need a few others, such as id and type_name, for non-calculation purposes. Basically I want to do this:
cur = connection.cursor()
QUERY = "SELECT * FROM table"
cur.execute(QUERY)
rs = cur.fetchall()
for row in rs:
    for col in row:
        # pseudocode for what I want:
        # if col came from column "X" or "Y" or "Z":
        #     calc ...
        # else:
        #     use id, type_name, whatever ...
        pass
How can I achieve something like this? Otherwise the code would just blow up like a bomb.
Maybe someone is searching for the answer too. With the help of a previous comment, I could solve it like this:
field_names = cur.description
for row in rs:
    for index, col in enumerate(row):
        name = field_names[index][0]
        if name == "...":
            ...
        elif name == "...":
            ...
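A slightly tidier variant of the same idea is to zip the column names with each row, so every row becomes a dict addressable by column name (a sketch; the column names in the comment are the ones from the question):

col_names = [d[0] for d in cur.description]
for row in rs:
    record = dict(zip(col_names, row))
    # columns can now be read by name, e.g. record["X"], record["Y"],
    # record["Z"] for the calculation and record["id"],
    # record["type_name"] for the rest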
I have a csv file of customer ids (CRM_id). I need to get their primary keys (an autoincrement int) from the customers table of the database. (I can't be assured of the integrity of the CRM_ids so I chose not to make that the primary key).
So:
import csv

customers = []
with open("CRM_ids.csv", 'r', newline='') as csvfile:
    customerfile = csv.DictReader(csvfile, delimiter=',', quotechar='"', skipinitialspace=True)
    # only one "CRM_id" field per row
    customers = [c for c in customerfile]
So far so good? I think this is the most Pythonic way of doing that (but happy to hear otherwise).
Now comes the ugly code. It works, but I hate appending to the list because that has to copy and reallocate memory on each loop, right? Is there a better way (pre-allocating plus enumerate to keep track of the index comes to mind, but maybe there's an even quicker/better way of being clever with the SQL so as not to run several thousand separate queries...)?
import sys
import mysql.connector

cnx = mysql.connector.connect(user='me', password=sys.argv[1], host="localhost", database="mydb")
cursor = cnx.cursor()
select_customer = "SELECT id FROM customers WHERE CRM_id = %(CRM_id)s LIMIT 1;"

c_ids = []
for row in customers:
    cursor.execute(select_customer, row)
    # fetchall() returns a list of 1-tuples because the SELECT has a
    # single column; the bare ids are unpacked with c[0] below
    c_ids.extend(cursor.fetchall())
c_ids = [c[0] for c in c_ids]
Edit:
Purpose is to get the primary keys in a list so I can use these to allocate some other data from other CSV files in linked tables (the customer id primary key is a foreign key to these other tables, and the allocation algorithm changes, so it's best to have the flexibility to do the allocation in python rather than hard coding SQL queries). I know this sounds a little backwards, but the "client" only works with spreadsheets rather than an ERP/PLM, so I have to build the "relations" for this small app myself.
What about changing your query to get what you want?
crm_ids = ",".join(customers)
select_customer = "SELECT UNIQUE id FROM customers WHERE CRM_id IN (%s);" % crm_ids
MySQL should be fine with even a multi-megabyte query, according to the manual; if it gets to be a really long list, you can always break it up - two or three queries are guaranteed to be much faster than a few thousand.
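If you do end up breaking it into batches, here is a sketch of chunked lookups (the chunk size of 1000 is an arbitrary assumption):

CHUNK = 1000
crm_ids = [c['CRM_id'] for c in customers]
c_ids = []
for i in range(0, len(crm_ids), CHUNK):
    chunk = crm_ids[i:i + CHUNK]
    placeholders = ",".join(["%s"] * len(chunk))
    cursor.execute(
        "SELECT DISTINCT id FROM customers WHERE CRM_id IN (%s)" % placeholders,
        chunk,
    )
    c_ids.extend(r[0] for r in cursor.fetchall())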
How about storing your CSV in a dict instead of a list?
customers = [c for c in customerfile]
becomes:
customers = {c['CRM_id']:c for c in customerfile}
Then select the entire cross-reference in one query:
cursor.execute('SELECT id, CRM_id FROM customers')
and add the new rowid as a new entry in the dict:
for row in cursor.fetchall():
    customers[row[1]]['newid'] = row[0]
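If you then want the plain list of primary keys back out of the dict (note the order is whatever the dict yields, not CSV order):

c_ids = [c['newid'] for c in customers.values() if 'newid' in c]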