I have some data that is base64 encoded that I want to convert back to binary even if there is a padding error in it. If I use
base64.decodestring(b64_string)
it raises an 'Incorrect padding' error. Is there another way?
UPDATE: Thanks for all the feedback. To be honest, all the methods mentioned sounded a bit hit
and miss so I decided to try openssl. The following command worked a treat:
openssl enc -d -base64 -in b64string -out binary_data
It seems you just need to add padding to your bytes before decoding. There are many other answers on this question, but I want to point out that (at least in Python 3.x) base64.b64decode will truncate any extra padding, provided there is enough in the first place.
So, something like: b'abc=' works just as well as b'abc==' (as does b'abc=====').
What this means is that you can just add the maximum number of padding characters that you would ever need—which is two (b'==')—and base64 will truncate any unnecessary ones.
This lets you write:
base64.b64decode(s + b'==')
which is simpler than:
base64.b64decode(s + b'=' * (-len(s) % 4))
Note that if the string s already has some padding (e.g. b"aGVsbG8="), this approach will only work if the validate keyword argument is set to False (which is the default). If validate is True this will result in a binascii.Error being raised if the total padding is longer than two characters.
From the docs:
If validate is False (the default), characters that are neither in the normal base-64 alphabet nor the alternative alphabet are discarded prior to the padding check. If validate is True, these non-alphabet characters in the input result in a binascii.Error.
However, if validate is False (or omitted, which is the default) you can blindly add two padding characters without any problem. Thanks to eel ghEEz for pointing this out in the comments.
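For example, in a quick Python 3 session (aGVsbG8 here is just hello encoded with its padding stripped):
>>> import base64
>>> base64.b64decode(b'aGVsbG8' + b'==')
b'hello'
>>> base64.b64decode(b'aGVsbG8' + b'=' * (-len(b'aGVsbG8') % 4))
b'hello'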
As said in other responses, there are various ways in which base64 data could be corrupted.
However, as Wikipedia says, removing the padding (the '=' characters at the end of base64 encoded data) is "lossless":
From a theoretical point of view, the padding character is not needed,
since the number of missing bytes can be calculated from the number
of Base64 digits.
So if this is really the only thing "wrong" with your base64 data, the padding can just be added back. I came up with this to be able to parse "data" URLs in WeasyPrint, some of which were base64 without padding:
import base64
import re
def decode_base64(data, altchars=b'+/'):
    """Decode base64, padding being optional.

    :param data: Base64 data as an ASCII byte string
    :returns: The decoded byte string.

    """
    data = re.sub(rb'[^a-zA-Z0-9%s]+' % altchars, b'', data)  # normalize
    missing_padding = len(data) % 4
    if missing_padding:
        data += b'=' * (4 - missing_padding)
    return base64.b64decode(data, altchars)
Tests for this function: weasyprint/tests/test_css.py#L68
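A quick usage sketch (the input here is just hello base64-encoded without its padding):
>>> decode_base64(b'aGVsbG8')
b'hello'
>>> decode_base64(b'aGVsbG8=')
b'hello'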
Just add padding as required. Heed Michael's warning, however.
b64_string += "=" * ((4 - len(b64_string) % 4) % 4) #ugh
"Incorrect padding" can mean not only "missing padding" but also (believe it or not) "incorrect padding".
If suggested "adding padding" methods don't work, try removing some trailing bytes:
lens = len(strg)
lenx = lens - (lens % 4 if lens % 4 else 4)
try:
    result = base64.decodestring(strg[:lenx])
except binascii.Error:  # decodestring raises binascii.Error on bad input
    pass  # handle or log the failure as appropriate
Update: Any fiddling around adding padding or removing possibly bad bytes from the end should be done AFTER removing any whitespace, otherwise length calculations will be upset.
It would be a good idea if you showed us a (short) sample of the data that you need to recover. Edit your question and copy/paste the result of print repr(sample).
Update 2: It is possible that the encoding has been done in an url-safe manner. If this is the case, you will be able to see minus and underscore characters in your data, and you should be able to decode it by using base64.b64decode(strg, '-_')
If you can't see minus and underscore characters in your data, but can see plus and slash characters, then you have some other problem, and may need the add-padding or remove-cruft tricks.
If you can see none of minus, underscore, plus and slash in your data, then you need to determine the two alternate characters; they'll be the ones that aren't in [A-Za-z0-9]. Then you'll need to experiment to see which order they need to be used in the 2nd arg of base64.b64decode()
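For instance, if the two non-standard characters turn out to be '.' and '~' (purely hypothetical stand-ins here), you could decode with both orderings and see which result makes sense:
import base64

data = b'..~~'   # hypothetical sample that uses '.' and '~' instead of '+' and '/'
for altchars in (b'.~', b'~.'):
    print(altchars, base64.b64decode(data, altchars))
# one ordering gives the bytes you expect; the other gives garbage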
Update 3: If your data is "company confidential":
(a) you should say so up front
(b) we can explore other avenues in understanding the problem, which is highly likely to be related to what characters are used instead of + and / in the encoding alphabet, or by other formatting or extraneous characters.
One such avenue would be to examine what non-"standard" characters are in your data, e.g.
from collections import defaultdict
d = defaultdict(int)
import string
s = set(string.ascii_letters + string.digits)
for c in your_data:
    if c not in s:
        d[c] += 1
print d
Use
string += '=' * (-len(string) % 4) # restore stripped '='s
Credit goes to a comment somewhere here.
>>> import base64
>>> enc = base64.b64encode('1')
>>> enc
'MQ=='
>>> base64.b64decode(enc)
'1'
>>> enc = enc.rstrip('=')
>>> enc
'MQ'
>>> base64.b64decode(enc)
Traceback (most recent call last):
  ...
TypeError: Incorrect padding
>>> base64.b64decode(enc + '=' * (-len(enc) % 4))
'1'
>>>
If there's a padding error it probably means your string is corrupted; base64-encoded strings should have a length that is a multiple of four. You can try adding the padding character (=) yourself to make the string a multiple of four, but it should already be one unless something is wrong.
The "Incorrect padding" error can also be caused by metadata that is present in the encoded string.
If your string looks something like 'data:image/png;base64,...base 64 stuff....',
then you need to remove the first part before decoding it.
For example, if you have an image as a base64-encoded string, try the snippet below.
from PIL import Image
from io import BytesIO
from base64 import b64decode
imagestr = 'data:image/png;base64,...base 64 stuff....'
im = Image.open(BytesIO(b64decode(imagestr.split(',')[1])))
im.save("image.png")
You can simply use base64.urlsafe_b64decode(data) if you are trying to decode a web image. It will automatically take care of the padding.
Check the documentation of the data source you're trying to decode. Is it possible that you meant to use base64.urlsafe_b64decode(s) instead of base64.b64decode(s)? That's one reason you might have seen this error message.
Decode string s using a URL-safe alphabet, which substitutes - instead
of + and _ instead of / in the standard Base64 alphabet.
This is for example the case for various Google APIs, like Google's Identity Toolkit and Gmail payloads.
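Here is a small illustration in Python 3 (the payload is just <b>hello</b> encoded with the URL-safe alphabet; note that it contains '-'):
>>> import base64
>>> base64.urlsafe_b64decode('PGI-aGVsbG88L2I-')
b'<b>hello</b>'
>>> base64.b64decode('PGI-aGVsbG88L2I-')   # the standard decoder ignores the '-' and the padding check then fails
Traceback (most recent call last):
  ...
binascii.Error: Incorrect padding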
Adding the padding is rather... fiddly. Here's the function I wrote with the help of the comments in this thread as well as the wiki page for base64 (it's surprisingly helpful) https://en.wikipedia.org/wiki/Base64#Padding.
import logging
import base64
import binascii


def base64_decode(s):
    """Add missing padding to string and return the decoded base64 string."""
    log = logging.getLogger()
    s = str(s).strip()
    try:
        return base64.b64decode(s)
    except (TypeError, binascii.Error):  # TypeError on Python 2, binascii.Error on Python 3
        padding = len(s) % 4
        if padding == 1:
            log.error("Invalid base64 string: {}".format(s))
            return ''
        elif padding == 2:
            s += '=='
        elif padding == 3:
            s += '='
        return base64.b64decode(s)
There are two ways to correct the input data described here, or, more specifically and in line with the OP, to make Python module base64's b64decode method able to process the input data to something without raising an un-caught exception:
1. Append == to the end of the input data and call base64.b64decode(...)
2. If that raises an exception, then
i. Catch it via try/except,
ii. (R?)Strip any = characters from the input data (N.B. this may not be necessary),
iii. Append A== to the input data (A== through P== will work),
iv. Call base64.b64decode(...) with those A==-appended input data
The result from Item 1. or Item 2. above will yield the desired result.
Caveats
This does not guarantee the decoded result will be what was originally encoded, but it will (sometimes?) give the OP enough to work with:
("Even with corruption I want to get back to the binary because I can still get some useful info from the ASN.1 stream").
See What we know and Assumptions below.
TL;DR
From some quick tests of base64.b64decode(...)
it appears that it ignores non-[A-Za-z0-9+/] characters; that includes ignoring =s unless they are the last character(s) in a parsed group of four, in which case the =s terminate the decoding (a=b=c=d= gives the same result as abc=, and a==b==c== gives the same result as ab==).
It also appears that all characters appended are ignored after the point where base64.b64decode(...) terminates decoding e.g. from an = as the fourth in a group.
As noted in several comments above, there are either zero, or one, or two, =s of padding required at the end of input data for when the [number of parsed characters to that point modulo 4] value is 0, or 3, or 2, respectively. So, from items 3. and 4. above, appending two or more =s to the input data will correct any [Incorrect padding] problems in those cases.
HOWEVER, decoding cannot handle the case where the [total number of parsed characters modulo 4] is 1, because it takes at least two encoded characters to represent the first decoded byte in a group of three decoded bytes. In uncorrupted encoded input data, this [N modulo 4]=1 case never happens, but as the OP stated that characters may be missing, it could happen here. That is why simply appending =s will not always work, and why appending A== will work when appending == does not. N.B. Using [A] is all but arbitrary: it adds only cleared (zero) bits to the decoded output, which may or may not be correct, but then the object here is not correctness but completion by base64.b64decode(...) sans exceptions.
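A minimal sketch of that [N modulo 4]=1 case and the A== trick, assuming Python 3 (QUJDREVGRw== is b'ABCDEFG'; here its last three characters have been lost):
import base64

truncated = b'QUJDREVGR'                      # 9 characters: 9 % 4 == 1
# base64.b64decode(truncated + b'==')         # would still raise binascii.Error; '==' cannot fix this case
print(base64.b64decode(truncated + b'A=='))   # b'ABCDEFD' -- it decodes, but the missing bits come back as
                                              # zeros, so the last byte is not the original b'G'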
What we know from the OP and especially subsequent comments is
It is suspected that there are missing data (characters) in the
Base64-encoded input data
The Base64 encoding uses the standard 64 place-values plus padding:
A-Z; a-z; 0-9; +; /; = is padding. This is confirmed, or at least
suggested, by the fact that openssl enc ... works.
Assumptions
The input data contain only 7-bit ASCII data
The only kind of corruption is missing encoded input data
The OP does not care about decoded output data at any point after that corresponding to any missing encoded input data
Github
Here is a wrapper to implement this solution:
https://github.com/drbitboy/missing_b64
I got this error without any use of base64 at all. The problem turned out to be with localhost; it works fine on 127.0.0.1.
In my case Gmail Web API was returning the email content as a base64 encoded string, but instead of encoded with the standard base64 characters/alphabet, it was encoded with the "web-safe" characters/alphabet variant of base64. The + and / characters are replaced with - and _. For python 3 use base64.urlsafe_b64decode().
This can be done in one line - no need to add temporary variables:
b64decode(f"{s}{'=' * (4 - len(s) % 4)}")
In case this error came from a web server: try URL-encoding your POST value. I was POSTing via curl and discovered I wasn't URL-encoding my base64 value, so characters like "+" were not escaped; the web server's URL-decode logic then converted the "+" characters to spaces.
"+" is a valid base64 character and perhaps the only character that gets mangled by an unexpected URL-decode.
You should use
base64.b64decode(b64_string, ' /')
if your "+" characters have been turned into spaces (as in the previous answer); by default, the altchars are '+/'.
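A brief illustration (the payload here is just <b>hello</b>; the spaces stand in for "+" characters lost to URL-decoding):
>>> import base64
>>> base64.b64decode('PGI aGVsbG88L2I ', ' /')   # ' ' is mapped back to '+', '/' is left alone
b'<b>hello</b>'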
I ran into this problem as well and nothing worked.
I finally managed to find the solution which works for me. I had zipped content in base64 and this happened to 1 out of a million records...
This is a version of the solution suggested by Simon Sapin.
When the missing padding (the length modulo 4) is 3, I remove the last 3 characters instead of adding padding.
Instead of "0gA1RD5L/9AUGtH9MzAwAAA=="
We get "0gA1RD5L/9AUGtH9MzAwAA"
missing_padding = len(data) % 4
if missing_padding == 3:
    data = data[0:-3]
elif missing_padding != 0:
    print("Missing padding : " + str(missing_padding))
    data += '=' * (4 - missing_padding)
data_decoded = base64.b64decode(data)
According to this answer, Trailing As in base64, the reason is nulls. But I still have no idea why the encoder messes this up...
def base64_decode(data: str) -> str:
    data = data.encode("ascii")
    rem = len(data) % 4
    if rem > 0:
        data += b"=" * (4 - rem)
    return base64.urlsafe_b64decode(data).decode('utf-8')
Simply add additional "=" padding characters to make the length a multiple of 4 before you try decoding the target string value. Something like:
if len(value) % 4 != 0:  # check if multiple of 4
    while len(value) % 4 != 0:
        value = value + "="
    req_str = base64.b64decode(value)
else:
    req_str = base64.b64decode(value)
In my case I faced that error while parsing an email. I got the attachment as a base64 string and extracted it via re.search. Eventually there was a strange additional substring at the end.
dHJhaWxlcgo8PCAvU2l6ZSAxNSAvUm9vdCAxIDAgUiAvSW5mbyAyIDAgUgovSUQgWyhcMDAyXDMz
MHtPcFwyNTZbezU/VzheXDM0MXFcMzExKShcMDAyXDMzMHtPcFwyNTZbezU/VzheXDM0MXFcMzEx
KV0KPj4Kc3RhcnR4cmVmCjY3MDEKJSVFT0YK
--_=ic0008m4wtZ4TqBFd+sXC8--
When I deleted --_=ic0008m4wtZ4TqBFd+sXC8-- and stripped the string, the parsing was fixed.
So my advice is to make sure that you are decoding a correct base64 string.
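A rough sketch of that cleanup (attachment_text here is a placeholder for whatever your re.search pulled out of the email):
import base64

b64_lines = []
for line in attachment_text.splitlines():
    if line.startswith('--'):   # stop at the MIME boundary line
        break
    b64_lines.append(line.strip())
decoded = base64.b64decode(''.join(b64_lines))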
Clear your browser cookies and check again; it should work.
In my case I faced this error after deleting the venv for the particular project, and it was showing the error for every field. I tried changing the browser (Chrome to Edge), and that actually worked.
Related
I am communicating with a piece of equipment over RS232, and it seems to only interpret commands correctly when issued commands in the following format:
b'\xXX'
for example:
equipment_ser.write(b'\xE1')
The argument is variable, and so I convert to hex before formatting the command. I'm having trouble coming up with a consistent way to ensure only 1 backslash while preserving the hex command. I need the entire range - \x00 to \xFF.
One approach was to use 'unicode escape':
setpoint_command_INT = 1
setpoint_command_HEX = "{0:#0{1}x}".format(setpoint_command_INT,4)
setpoint_command_HEX_partially_formatted = r'\x' + setpoint_command_HEX[2:4]
setpoint_command_HEX_fully_formatted = setpoint_command_HEX_partially_formatted.encode('utf_8').decode('unicode_escape')
works ok for the above example:
Out[324]: '\x01'
but not for larger numbers, where this process changes the character:
setpoint_command_INT = 240
Out[332]: 'ð'
How can I format this command so that I have the single backslash while preserving the ability to command across the full range 0-255?
Thanks
Edit:
The correct way to do this is as said by Michael below:
bytes((240,))
Thank you for the prompt responses.
In your code, you are sending a single byte
equipment_ser.write(b'\xE1')
In other words, you're sending decimal 225 but as a single byte.
For any integer value in the range 0-255 you can create its byte equivalent by:
import sys
N = 225 # for example
b = N.to_bytes(1, sys.byteorder)
equipment_ser.write(b)
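For what it's worth, this agrees with the bytes((240,)) form mentioned above across the whole 0-255 range; a quick sanity check:
import sys

for n in range(256):
    assert n.to_bytes(1, sys.byteorder) == bytes((n,))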
Very simple, I know, but the docs aren't too helpful. I'm trying to hash a simple string. I was following this guide. The example given therein is:
import hashlib
hash_object = hashlib.md5(b'Hello World')
print(hash_object.hexdigest())
And then you have a hash representation. Suppose I want to take this one step further. I have four strings I want to concatenate together, the result of which needs to be converted to byte sequence, in order to be passed to the hashlib.md5() function. However, I'm curious how I can replicate the b'Hello World' syntax using a variable instead of a hard-coded string. Docs seem to suggest you can pass in a format to the built-in format function, so for my use-case something like:
my_string = '%s%s%s%s' % (first, second, third, fourth)
byte_string = format(my_string, 'b')
This doesn't quite work, though. How do I do this?
Strings in Python are a sequence of characters, to convert a string to a sequence of bytes you encode it using some character set. For example:
my_string = '%s%s%s%s' % (first, second, third, fourth)
byte_string = my_string.encode('utf-8')
Instead of my_string.encode('utf-8') you could also use bytes(my_string, 'utf-8'), these are equivalent. You can also use a different encoding if you like, but UTF-8 is generally a good choice because it is capable of representing any code point (character) and it is fairly compact, especially for ASCII data.
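Tying this back to the original hashing example, a minimal sketch (the four values are just placeholders):
import hashlib

first, second, third, fourth = 'spam', 'and', 'eggs', '!'   # placeholder values
my_string = '%s%s%s%s' % (first, second, third, fourth)
hash_object = hashlib.md5(my_string.encode('utf-8'))
print(hash_object.hexdigest())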
my_string = '%s%s%s%s' % (first, second, third, fourth)
byte_string = bytes(my_string)  # Python 2 only; on Python 3, pass an encoding: bytes(my_string, 'utf-8')
I'm trying to do a comparison of some byte values - source A comes from a file that is being 'read':
f = open(fname, "rb")
f_data = f.read()
f.close()
These files can be anything from a few KB to a few MB in size.
Source B is a dictionary of known patterns:
eof_markers = {
    'jpg': b'\xff\xd9',
    'pdf': b'\x25\x25\x45\x4f\x46',
}
(This list will be extended once the basic process works)
Essentially I'm trying to 'read' the file (source A) and then incrementally inspect the last bytes for matches against the pattern list, using testString = f_data[-counter:]. If no match is found, it should increase counter by 1 and try to pattern-match against the list again.
I've tried a number of different ways to get this working; I can get the testString to increment correctly, but I keep running into encoding issues where the various approaches want to ASCIIify the bytes to undertake the comparison.
I'm a bit lost, and not for the first time, wandering around the code changing int to u to b and not getting past issues like d9 being a reserved value, and therefore not being able to use the ASCII-type comparison tools, e.g. if format_type in testString: (which results in UnicodeDecodeError: 'ascii' codec can't decode byte 0xa9).
I tried to convert everything to an integer, but that was throwing this error: ValueError: invalid literal for int() with base 2: '.' or ValueError: invalid literal for int() with base 10: '.' I tried to convert the testString to hex bytes, but kept getting TypeError: hex() argument can't be converted to hex (this is more my lack of understanding than anything else I'm sure!....)
There are a number of resources I've found that talk about encoding / hex comparisons (e.g. stackoverflow.com/questions/10561923/unicodedecodeerror-ascii-codec-cant-decode-byte-0xef-in-position-1); I've just not found something that I can either fully understand, or that points me down the right path.
I've been stuck on this for a while, so any pointers are gratefully received.
I'm not sure exactly what you're trying to do, but I ran this code in Python 3.2.3.
#f = open(fname, "rb")
#f_data = f.read()
#f.close()
f_data = b'\x12\x43\xff\xd9\x00\x23'
eof_markers = {
    'jpg': b'\xff\xd9',
    'pdf': b'\x25\x25\x45\x4f\x46',
}
for counter in range(-4, 0):
    for name, marker in eof_markers.items():
        print(counter, ('' if marker in f_data[counter:] else '!') + name)
I'm using a hardcoded f_data, but you can undo that by uncommenting lines 1-3 and commenting out line 4.
Here's the output:
-4 !pdf
-4 jpg
-3 !pdf
-3 !jpg
-2 !pdf
-2 !jpg
-1 !pdf
-1 !jpg
Is there something this isn't doing that you need to do?
I can't figure out how to comment on your main post instead of making a subpost. Anyway,
I have answers to some of your questions.
int(v) converts a formatted number (e.g. '599') to an integer, not a character (e.g. "!") to its integer value. You would want ord() for that.
However I see no reason you would need to use either in this situation.
Hex != binary. Hex is just a numeric base. Binary is raw byte values that may not be printable depending on their value. This is why they show up as escape codes like "\xfd"; that's how Python represents unprintable characters to you, as hex codes. However, they are still single characters with no special status, so they don't need conversion. It's perfectly valid to compare 'A' with '\xfd'. Hence, you should be able to do the comparison without any conversion at all.
changing 'u' to 'b' will only have any real effect if you're running Python 3.x
As for directly solving the problem, I feel that while it's clear what you want to do, it's not clear why you have chosen to do things in this way. To get a better answer, you will need to ask a clearer question.
Here's an example of an alternative approach:
# convert eof markers to a list of characters
eof_markers = {k: list(v) for k,v in eof_markers.items()}
# assuming that the bytes you have read in are being added to a list,
# we can then do a check for the entire EOF string by:
# outer loop reading the next byte, etc, omitted.
for mname, marker in eof_markers.items():
    nmarkerbytes = len(marker)
    enoughbytes = len(bytes_buffer) >= nmarkerbytes
    if enoughbytes and bytes_buffer[-nmarkerbytes:] == marker:
        location = f.tell()
        print ('%s marker found at %d' % (mname, location))
There are other, faster approaches using bytes or bytearray (for example, using the 'rfind' method), but this is the simplest approach to explain.
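For completeness, here is a rough sketch of that rfind idea, run against the f_data bytes read earlier and the original bytes form of eof_markers from the question; it reports the last place each marker occurs, if any:
for name, marker in eof_markers.items():
    pos = f_data.rfind(marker)
    if pos != -1:
        print('%s marker found at offset %d' % (name, pos))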
As I understand it, files like /dev/urandom provide just a constant stream of bits. The terminal emulator then tries to interpret them as strings, which results in a mess of unrecognised characters.
How would I go about doing the same thing in python, send a string of ones and zeros to the terminal as "raw bits"?
edit
I may have to clarify:
Say for example the string I want to "print" is 1011100. On an ascii system, the output should be "\". If I cat /dev/urandom, it provides a constant stream of bits. Which get printed like this: "���c�g/�t]+__��-�;". That's what I want.
Stephano: the key is the incomplete answer by "#you" above, the chr function:
import random, sys
for i in xrange(500):
    sys.stdout.write(chr(random.randrange(256)))
Use the chr function. It takes an input between 0 and 255 and returns a string containing the character corresponding to that value.
And from another question on StackOverflow you can get a _bin function.
def _bin(x, width):
    return ''.join(str((x>>i)&1) for i in xrange(width-1,-1,-1))
Then simply call _bin(ord(x), 8), where x is a character (a string of length one), as in the example below.
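For example (92, i.e. a backslash, is the character from the question):
>>> _bin(ord('\\'), 8)
'01011100'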
import sys, random
while True:
    sys.stdout.write(chr(random.getrandbits(8)))
    sys.stdout.flush()
I am expecting users to upload a CSV file of max size 1MB to a web form that should fit a given format similar to:
"<String>","<String>",<Int>,<Float>
That will be processed later. I would like to verify that the file fits a specified format, so that the program that shall later use the file doesn't receive unexpected input and there are no security concerns (say, some injection attack against the parsing script that does some calculations and a DB insert).
(1) What would be the best way to go about doing this that would be fast and thorough? From what I've researched I could go the path of regex or something more like this. I've looked at the python csv module but that doesn't appear to have any built-in verification.
(2) Assuming I go for a regex, can anyone direct me to towards the best way to do this? Do I match for illegal characters and reject on that? (eg. no '/' '\' '<' '>' '{' '}' etc.) or match on all legal eg. [a-zA-Z0-9]{1,10} for the string component? I'm not too familiar with regular expressions so pointers or examples would be appreciated.
EDIT:
Strings should contain no commas or quotes; they would just contain a name (i.e. first name, last name). And yes, I forgot to add that they would be double-quoted.
EDIT #2:
Thanks for all the answers. Cutplace is quite interesting but is a standalone tool. Decided to go with pyparsing in the end because it gives more flexibility should I add more formats.
Pyparsing will process this data, and will be tolerant of unexpected things like spaces before and after commas, commas within quotes, etc. (csv module is too, but regex solutions force you to add "\s*" bits all over the place).
from pyparsing import *
integer = Regex(r"-?\d+").setName("integer")
integer.setParseAction(lambda tokens: int(tokens[0]))
floatnum = Regex(r"-?\d+\.\d*").setName("float")
floatnum.setParseAction(lambda tokens: float(tokens[0]))
dblQuotedString.setParseAction(removeQuotes)
COMMA = Suppress(',')
validLine = dblQuotedString + COMMA + dblQuotedString + COMMA + \
            integer + COMMA + floatnum + LineEnd()
tests = """\
"good data","good2",100,3.14
"good data" , "good2", 100, 3.14
bad, "good","good2",100,3.14
"bad","good2",100,3
"bad","good2",100.5,3
""".splitlines()
for t in tests:
    print t
    try:
        print validLine.parseString(t).asList()
    except ParseException, pe:
        print pe.markInputline('?')
        print pe.msg
    print
Prints
"good data","good2",100,3.14
['good data', 'good2', 100, 3.1400000000000001]
"good data" , "good2", 100, 3.14
['good data', 'good2', 100, 3.1400000000000001]
bad, "good","good2",100,3.14
?bad, "good","good2",100,3.14
Expected string enclosed in double quotes
"bad","good2",100,3
"bad","good2",100,?3
Expected float
"bad","good2",100.5,3
"bad","good2",100?.5,3
Expected ","
You will probably be stripping those quotation marks off at some future time; pyparsing can do that at parse time by adding:
dblQuotedString.setParseAction(removeQuotes)
If you want to add comment support to your input file, say a '#' followed by the rest of the line, you can do this:
comment = '#' + restOfLine
validLine.ignore(comment)
You can also add names to these fields, so that you can access them by name instead of index position (which I find gives more robust code in light of changes down the road):
validLine = dblQuotedString("key") + COMMA + dblQuotedString("title") + COMMA + \
integer("qty") + COMMA + floatnum("price") + LineEnd()
And your post-processing code can then do this:
data = validLine.parseString(t)
print "%(key)s: %(title)s, %(qty)d in stock at $%(price).2f" % data
print data.qty*data.price
I'd vote for parsing the file, checking you've got 4 components per record, that the first two components are strings, the third is an int (checking for NaN conditions), and the fourth is a float (also checking for NaN conditions).
Python would be an excellent tool for the job.
I'm not aware of any libraries in Python to deal with validation of CSV files against a spec, but it really shouldn't be too hard to write.
import csv
import math

def check_csv(path):
    """Validate that each row is: string, string, int, float."""
    dataChecker = csv.reader(open(path))
    for row in dataChecker:
        if len(row) != 4:
            print 'Invalid row length.'
            return
        my_int = int(row[2])
        my_float = float(row[3])
        if math.isnan(my_int):
            print 'Bad int found'
            return
        if math.isnan(my_float):
            print 'Bad float found'
            return
    print 'All good!'

check_csv('data.csv')
Here's a small snippet I made:
import csv
f = csv.reader(open("test.csv"))
for value in f:
    value[0] = str(value[0])
    value[1] = str(value[1])
    value[2] = int(value[2])
    value[3] = float(value[3])
If you run that with a file that doesn't have the format you specified, you'll get an exception:
$ python valid.py
Traceback (most recent call last):
File "valid.py", line 8, in <module>
i[2] = int(i[2])
ValueError: invalid literal for int() with base 10: 'a3'
You can then use a try/except ValueError to catch it and let the users know what they did wrong.
There can be a lot of corner-cases for parsing CSV, so you probably don't want to try doing it "by hand". At least start with a package/library built-in to the language that you're using, even if it doesn't do all the "verification" you can think of.
Once you get there, then examine the fields for your list of "illegal" chars, or examine the values in each field to determine they're valid (if you can do so). You also don't even need a regex for this task necessarily, but it may be more concise to do it that way.
You might also disallow embedded \r or \n, \0 or \t. Just loop through the fields and check them after you've loaded the data with your csv lib.
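A minimal sketch of that post-load check, assuming Python 3 (the set of disallowed characters is just an example; the csv module may reject NUL bytes on its own):
import csv

BAD_CHARS = set('\r\n\t\0')   # example set of characters to disallow inside fields

with open('data.csv', newline='') as fh:
    for row in csv.reader(fh):
        for field in row:
            if BAD_CHARS & set(field):
                print('Rejecting row with control characters:', row)
                break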
Try Cutplace. It verifies that tabular data conforms to an interface control document.
Ideally, you want your filtering to be as restrictive as possible - the fewer things you allow, the fewer potential avenues of attack. For instance, a float or int field has a very small number of characters (and very few configurations of those characters) which should actually be allowed. String filtering should ideally be restricted to only what characters people would have a reason to input - without knowing the larger context it's hard to tell you exactly which you should allow, but at a bare minimum the string match regex should require quoting of strings and disallow anything that would terminate the string early.
Keep in mind, however, that some names may contain things like single quotes ("O'Neil", for instance) or dashes, so you couldn't necessarily rule those out.
Something like...
/"[a-zA-Z' -]+"/
...would probably be ideal for double-quoted strings which are supposed to contain names. You could replace the + with a {x,y} length min/max if you wanted to enforce certain lengths as well.
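As a quick illustration of how that pattern behaves (using re.fullmatch so the whole field must match):
import re

name_re = re.compile(r"\"[a-zA-Z' -]+\"")
print(bool(name_re.fullmatch("\"O'Neil\"")))     # True: apostrophes, spaces and dashes are allowed
print(bool(name_re.fullmatch('"x4<script>"')))   # False: digits and angle brackets are rejected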