So I have a simple form that takes a few inputs (two text fields and two textareas) and runs them through a function that puts all four inputs into the datastore (Google App Engine). The problem shows up when I have a decent amount of text in one of the textareas (meaning 5 paragraphs of roughly 4-5 sentences each, around 2,000 characters).
I am using TextProperty() fields in the datastore (and StringProperty for the smaller inputs). It works when I only put in a few words for each, but not when I put in a decent amount of text. What happens is that a blank webpage comes up instead of my basic confirmation page, and no data is transferred into the datastore.
My handler uses get() (as opposed to post()).
Why is this happening and how do I fix it?
I'm sure this is a simple fix, but I am somewhat green to this. Thanks
While in theory there is no limit, in practice browsers apply limits to the query string, and since you are using GET instead of POST, all your inputs are passed as query parameters in the URL.
When you are submitting values from input forms, you should use method="POST" on the <form> and handle it correctly in your handler using post(). If you go through the Getting Started guide you will find the section on Handling Forms.
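For illustration, here is a minimal sketch of that change with a webapp2-style handler and the form's method switched to post; the handler name, field names, and model are made up for the example:

import webapp2
from google.appengine.ext import db

class Entry(db.Model):
    title = db.StringProperty()
    author = db.StringProperty()
    body = db.TextProperty()    # TextProperty holds long text without StringProperty's 500-character limit
    notes = db.TextProperty()

class SubmitHandler(webapp2.RequestHandler):
    def post(self):             # was get(); POST data travels in the request body, not the URL
        entry = Entry(
            title=self.request.get('title'),
            author=self.request.get('author'),
            body=self.request.get('body'),
            notes=self.request.get('notes'),
        )
        entry.put()
        self.response.write('Saved.')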
Related
I packed some functions into a class that translates text into another language using the Bing or Google translation API, and I am using Flask to render templates, as you may be aware.
While the translation runs, I have status and progress information, like "completed 10%", generated as the paragraphs are translated. That information is produced inside the translation class, as you might imagine; the class does the translation job.
However, in my Flask app, after calling the class to do the translation, I want to make an AJAX call from the webpage to the Flask app to retrieve that 10% figure generated by the class.
Here is what I did:
If I don't use a class and put all the functions in the main Flask app file, I can use a global variable to store the 10% figure, but that makes the code complex, and I want to keep all the associated functions packed in a class.
In the Flask app, I tried using session['translation_pos'] to retrieve the value that I stored in session['translation_pos'] inside the class, but that does not seem to work.
I use Python 3 and Flask, and I don't know how to get this progress percentage from the class, where the data is generated, to the app.
Maybe one way would be to store the number in a text file somewhere and read that file in the app, but surely that shouldn't be the way to handle this problem.
If anyone could advise with some ideas, that would be much appreciated.
Thank you All.
You may want to look at a different approach to running the task, using something like Celery or Redis Queue - covered very well here in the mega tutorial.
By using one of these, you can run the task and query the runner periodically for progress to report it back to the user.
If it were me, I would store the processed data in a database. When the task is completed, the result gets re-queried and passed through to the UI as a template variable (or streamed via an AJAX call if it's a large data set).
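As a rough sketch of the progress-reporting idea with Redis Queue, where the worker job records its own progress in job.meta and Flask exposes a polling endpoint for AJAX; translate_document, do_translation, load_paragraphs, and the route paths are assumptions made for the example:

# worker side: the job reports its own progress through job.meta
from rq import get_current_job

def translate_document(paragraphs):
    job = get_current_job()
    for i, paragraph in enumerate(paragraphs, start=1):
        do_translation(paragraph)                       # your existing translation logic
        job.meta['progress'] = int(100.0 * i / len(paragraphs))
        job.save_meta()                                 # persist the progress value in Redis

# Flask side: start the job, then let the page poll for progress
from flask import Flask, jsonify
from redis import Redis
from rq import Queue
from rq.job import Job

app = Flask(__name__)
redis_conn = Redis()
queue = Queue(connection=redis_conn)

@app.route('/translate', methods=['POST'])
def start_translation():
    job = queue.enqueue(translate_document, load_paragraphs())
    return jsonify(job_id=job.get_id())

@app.route('/progress/<job_id>')
def progress(job_id):
    job = Job.fetch(job_id, connection=redis_conn)
    return jsonify(progress=job.meta.get('progress', 0), status=job.get_status())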
I've recently started working with Django. I'm working on an existing Django/Python based site. In particular I'm implementing some functionality to create and display a PDF document when a particular URL is hit. I have an entry in the app's urls file that routes to a function in the views file and the PDF generation is working fine.
However, the view function is pretty big and I want to extract the code out somewhere to keep my view as thin as possible, but I'm not sure of the best/correct approach. I'll probably need to generate other PDFs in due course so would it make sense to create a 'pdfs' app and put code in there? If so, should it go in a model or view?
In a PHP/CodeIgniter environment for example I would put the code into a model, but models seem to be closely linked to database tables in Django and I don't need any db functionality for this.
Any pointers/advice from more experienced Django users would be appreciated.
Thanks
If you plan to scale your project, I would suggest moving it to a separate app. Generally speaking, generating PDFs directly on a URL hit is not the best thing to do performance-wise. Generating a PDF file is pretty heavy on your server, so if multiple people do it at the same time, the performance of your system will suffer.
As a first step, just put it in a separate class and execute that code from the view. At some point you will probably want to do permission checks, etc. - that stays in the view, while the generation of the PDF itself is cleanly separated.
Once you have tested your code and need to scale, you can replace that one-line call in the view with code that puts the PDF generation on a queue and only pulls the result once it's done - that will let you manage your computing resources better.
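A minimal sketch of that separation; the pdfs/generators.py module, the ReportGenerator class, the use of ReportLab for the actual rendering, and the view name are all assumptions for illustration, not the actual project layout:

# pdfs/generators.py - all PDF-heavy logic lives outside the view
from io import BytesIO
from reportlab.pdfgen import canvas

class ReportGenerator(object):
    def __init__(self, title):
        self.title = title

    def render(self):
        """Return the generated PDF as bytes."""
        buffer = BytesIO()
        pdf = canvas.Canvas(buffer)
        pdf.drawString(100, 800, self.title)
        pdf.showPage()
        pdf.save()
        return buffer.getvalue()

# views.py - the view stays thin: permissions, parameters, response
from django.http import HttpResponse
from pdfs.generators import ReportGenerator

def report_pdf(request):
    pdf_bytes = ReportGenerator(title="Example report").render()
    return HttpResponse(pdf_bytes, content_type='application/pdf')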
Yes, you can in principle do it in a separate app (the concept of reusable apps is the basis for their existence).
However, not many people do it, and not many applications require it. It depends on how, and whether, the functionality will be shared; in other words, there must be a real benefit.
The code normally goes in both the views and the models (to isolate code, and to make use of model managers).
I am working on an internal database that a handful of people will be using. In short, there will be a non-authenticated search page that will allow them to do partial searches on various fields.
I am trying to figure out the best way to display upwards of 1,000 individual components. Ideally it would show a list of N components per page, but since I do not know the size of the result set until they press search, I don't know how to split it up.
I am afraid this is a case of not knowing what I don't know.
I think you want Django's pagination. That should allow you to query the database and split the resulting objects across pages. The documentation even has an example.
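A short sketch of what that can look like in a view; the Component model, the page size of 25, and the template name are assumptions for the example:

from django.core.paginator import Paginator
from django.shortcuts import render

def search(request):
    query = request.GET.get('q', '')
    results = Component.objects.filter(name__icontains=query)  # partial match on one field
    paginator = Paginator(results, 25)                   # 25 results per page, however many there are
    page = paginator.get_page(request.GET.get('page'))   # clamps missing or out-of-range page numbers
    return render(request, 'search_results.html', {'page': page})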
Data:
['<p>Work! please work.img:0\xc3\x82\xc2\xa0Will you?img:1</p>img:2img:3\xc3\x82\xc2\xa0ascasdacasdadasdaca HAHAHAHAHA! BAND!\n', '\n', "<p>Random test.</p><p><br />If you want to start a flame war, mention lines of code per day or hour in a developer\xc3\xa2€™s public forum. At least that is what I found when I started investigating how many lines of code are written per day per programmer. Lines of code, or loc for short, are supposedly a terrible metric for measuring programmer productivity and empirically I agree with this. There are too many variables involved starting with the definition of a line of code and going all the way up to the complexity of the requirements. There are single lines that take a long time to get right and there many lines which are mindless boilerplate code. All the same this measurement does have information encoded in it; the hard part is extracting that information and drawing the correct conclusions. Unfortunately I don\xc3\xa2€™t have access to enough data about software projects to provide a statistically sound analysis but I got a very interesting result from measuring two very different projects that I would like to share.</p><p>The first project is a traditional client server data mining tool for a vertical market mostly built in VB.NET and WinForms. This project started in 2003 and has been through several releases and an upgrade from .NET 1.1 to .NET 2.0. It has server components but most of the half a million lines of code lives in the client side. The team has always had around four developers although not always the same people. The average lines of code for this project came in at around ninety lines of code per day per developer. I wasn\xc3\xa2€™t able to measure the SQL in the stored procedures so this number is slightly inflated.</p><p><em>The second project is much smaller adding up to ten thousand lines of C# plus seven thousand lines of XAML c</em>reated by a team of four that also worked on the first project. This project lasted three months and it is a WPF point of sale application thus very different in scope from the first project. <strong>It was built around a number of web services in SOA fashion and does not have a database per se. Its average came up around seventy lines of code per developer per day.</strong></p><p>I am very surprised with the closeness of these numbers, especially given the difference in size and scope of the products. The commonality between them are the .NET framework and the team and one of them may be the key. Of these two, I am leaning to the .NET framework being the unifier because although the developers worked on both projects, three of elements on the team of the second project have spent less than a year on the first project and did not belong to the core team that wrote the vast majority of that first product. Or maybe there is something more general at work here?</p><p>The first step in using the WP_Filesystem is requesting credentials from the user. 
The normal way this is accomplished is at the time when you're saving the results of a form input, or you have otherwise determined that you need to write to a file.</p><p>The credentials form can be displayed onto an admin page by using the following code:</p><pre>$url = wp_nonce_url('themes.php?page=example','example-theme-options');\n</pre>", "if (false === ($creds = request_filesystem_credentials($url, '', false, false, null) ) ) {\n", '\treturn; // stop processing here\n', '}\n', '<p>The request_filesystem_credentials() call takes five arguments.</p><ul><li>The URL to which the form should be submitted (a nonced URL to a theme page was used in the example above)</li><li>A method override (normally you should leave this as the empty string: "")</li><li>An error flag (normally false unless an error is detected, see below)</li><li>A context directory (false, or a specific directory path that you want to test for access)</li><li>Form fields (an array of form field names from your previous form that you wish to "pass-through" the resulting credentials form, or null if there are none)</li></ul><p>The request_filesystem_credentials call will test to see if it is capable of writing to the local filesystem directly without credentials first. If this is the case, then it will return true and not do anything. Your code can then proceed to use the WP_Filesystem class.</p><p>The request_filesystem_credentials call also takes into account hardcoded information, such as hostname or username or password, which has been inserted into the wp-config.php file using defines. If these are pre-defined in that file, then this call will return that information instead of displaying a form, bypassing the form for the user.</p><p>If it does need credentials from the user, then it will output the FTP information form and return false. In this case, you should stop processing further, in order to allow the user to input credentials. Any form fields names you specified will be included in the resulting form as hidden inputs, and will be returned when the user resubmits the form, this time with FTP credentials.</p><p>Note: Do not use the reserved names of hostname, username, password, public_key, or private_key for your own inputs. These are used by the credentials form itself. Alternatively, if you do use them, the request_filesystem_credentials function will assume that they are the incoming FTP credentials.</p><p>When the credentials form is submitted, it will look in the incoming POST data for these fields, and if found, it will return them in an array suitable for passing to WP_Filesystem, which is the next step.</p><p><a id="Initializing_WP_Filesystem_Base" name="Initializing_WP_Filesystem_Base"></a>']
I use ReportLab to convert it to pdf but it fails.
This is my ReportLab code:
for page in self.pagelist:
    self.image_parser(page)
    print page.content
    for i in range(0, len(page.content)):
        bogustext = page.content[i]
        while (len(re.findall(r'img:?', bogustext)) > 0):
            for m in re.finditer(r'img:?', bogustext):
                image_tag = bogustext[m.start():m.end()+1]
                print (image_tag.split(':')[1])
                im = Image(page.images[int(image_tag.split(':')[1])], width=2*inch, height=2*inch)
                Story.append(Paragraph(bogustext[0:m.start()], style))
                bogustext = bogustext.replace(bogustext[0:m.start()], '')
                Story.append(im)
                bogustext = bogustext.replace(image_tag, '')
                break
        p = Paragraph(bogustext, style)
        Story.append(p)
        Story.append(Spacer(1, 0.2*inch))
page is an instance of a class whose page.content attribute contains the data I mentioned above.
self.image_parser(page) is a function that removes all the image URLs from page.content (the data above).
Error:
xml parser error (invalid attribute name id) in paragraph beginning
'<p>The request_filesystem_cred'
I don't get this error if I produce a PDF for every element of the list but I do get one if I try to make a complete PDF out of it. Where am I going wrong?
I have a file that contains ~16,000 lines of information on entities. The user is supposed to upload the file using an HTML upload form; the system then handles it by reading the file line by line, creating entities, and put()'ing them into the datastore.
I'm limited by the 30-second request time limit. I have tried a lot of different workarounds using the Task Queue, forced HTML redirects, etc., and nothing has worked for me.
I am using forced HTML redirecting to delete all data and this works, albeit VERY slowly. (4th answer here: Delete all data for a kind in Google App Engine)
I can't seem to apply this to my uploading problem, since my method has to be a POST method. Is there a solution somehow? Sample code would be much appreciated since I'm very new to web development in general.
To solve a similar problem, I stored the dataset in a model with a single TextProperty, then spawned a task queue task (a rough sketch follows the list) that:
Fetches a dataset from the datastore, if there are any left.
Checks whether the length of the dataset is <= N, where N is some small number of entities you can put() without a timeout (I used 5). If so, it writes the individual entities, deletes the dataset record, and spawns a new copy of the task.
If the dataset is bigger than N, it splits it into N parts in the same format, writes those to the datastore, deletes the original entity, and spawns a new copy of the task.
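Here is a rough sketch of that pattern on the Python 2 runtime, using the deferred library (which runs on top of the task queue); the Dataset model holding newline-separated records and the make_entity() helper are assumptions made for the example:

from google.appengine.ext import db, deferred

N = 5  # number of records that can safely be put() inside one task run

class Dataset(db.Model):
    lines = db.TextProperty()  # newline-separated records still waiting to be written

def process_datasets():
    dataset = Dataset.all().get()
    if dataset is None:
        return                                            # nothing left to do
    lines = dataset.lines.splitlines()
    if len(lines) <= N:
        db.put([make_entity(line) for line in lines])     # small enough: write the real entities
    else:
        # too big: split into N smaller Dataset records and let a later task handle them
        chunk_size = (len(lines) + N - 1) // N
        chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
        db.put([Dataset(lines='\n'.join(chunk)) for chunk in chunks])
    dataset.delete()                                      # the original record is done either way
    deferred.defer(process_datasets)                      # spawn a new copy of the task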
If you're doing this to bulk load data, why not use the bulk loader?
If you need the interface to be accessible to non-admin users then, as suggested, you need to break the file up into decent-sized chunks (blocks of n lines each), put them into the datastore, and start a task to deal with each of them.
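A simplified sketch of the upload side of that idea, assuming webapp2 on the Python 2 runtime; for brevity it passes each chunk directly in the task payload via the deferred library instead of storing it in the datastore first, and process_chunk() is a hypothetical function that parses its lines and put()s the entities:

import webapp2
from google.appengine.ext import deferred

LINES_PER_CHUNK = 200  # small enough for one task to finish well inside its deadline

class UploadHandler(webapp2.RequestHandler):
    def post(self):
        lines = self.request.get('datafile').splitlines()    # uploaded file contents
        for i in range(0, len(lines), LINES_PER_CHUNK):
            chunk = lines[i:i + LINES_PER_CHUNK]
            deferred.defer(process_chunk, chunk)              # each chunk gets its own task
        self.response.write('Upload accepted; %d lines queued.' % len(lines))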