Error while loading two .dot files in Python

How can I load two separate .dot files in Python?
I tried the following, but I get an error.
#import pygraphviz as pgv
import networkx as nx
#import matplotlib.pyplot as plt
Gtmp1 = nx.read_dot("case1.dot")
Gtmp2 = nx.read_dot("case2.dot")
Error:
Warning: syntax error in line 2 near '"'
Traceback (most recent call last):
File "/home/thinkpad/anaconda3/lib/python3.4/site-packages/pygraphviz/agraph.py", line 1201, in read
self.handle = gv.agread(fh, None)
ValueError: agread: bad input data
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Graphs.py", line 196, in <module>
Gtmp2 = nx.read_dot("case2.dot")
File "/home/thinkpad/anaconda3/lib/python3.4/site-packages/networkx/drawing/nx_agraph.py", line 197, in read_dot
A=pygraphviz.AGraph(file=path)
File "/home/thinkpad/anaconda3/lib/python3.4/site-packages/pygraphviz/agraph.py", line 162, in __init__
self.read(filename)
File "/home/thinkpad/anaconda3/lib/python3.4/site-packages/pygraphviz /agraph.py", line 1203, in read
raise DotError
pygraphviz.agraph.DotError
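The "syntax error in line 2" warning suggests the problem is in the contents of case2.dot rather than in loading two files from one script. A minimal diagnostic sketch (file names taken from the question, everything else an assumption): load each file separately and, if one fails to parse, print its first few lines so the offending quote can be located.
import networkx as nx

for path in ("case1.dot", "case2.dot"):
    try:
        # in recent networkx releases read_dot lives under nx_agraph
        # (older releases expose it as nx.read_dot)
        g = nx.nx_agraph.read_dot(path)
        print(path, "loaded with", g.number_of_nodes(), "nodes")
    except Exception as exc:
        print(path, "failed to parse:", exc)
        # show the start of the file so the reported syntax error can be found
        with open(path) as fh:
            for lineno, line in enumerate(fh, start=1):
                print(lineno, line.rstrip())
                if lineno >= 5:
                    break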

Related

Nmap scan a network range with subnet raises PortScannerError

I'm trying to scan the 192.168.10.1/24 network with nmap, using Python 3.9.2, Nmap 7.80 and python-nmap 0.7.1.
Code:
import nmap
nm = nmap.PortScanner()
result = nm.scan(hosts = '192.168.10.1/24')
print(result)
Error:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/nmap/nmap.py", line 369, in analyse_nmap_xml_scan
dom = ET.fromstring(self._nmap_last_output)
File "/usr/lib/python3.9/xml/etree/ElementTree.py", line 1348, in XML
return parser.close()
xml.etree.ElementTree.ParseError: no element found: line 7, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/fypgroup02/Documents/test.py", line 4, in <module>
result = nm.scan(hosts = '192.168.10.1/24')
File "/usr/local/lib/python3.9/dist-packages/nmap/nmap.py", line 306, in scan
return self.analyse_nmap_xml_scan(
File "/usr/local/lib/python3.9/dist-packages/nmap/nmap.py", line 372, in analyse_nmap_xml_scan
raise PortScannerError(nmap_err)
nmap.nmap.PortScannerError: "nmap: Target.cc:503: void Target::stopTimeOutClock(const timeval*): Assertion `htn.toclock_running == true' failed.\n"
I have tried scanning 192.168.10.1-10 and it works, so why does 192.168.10.1/24 not work?
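One thing worth trying (an assumption, not something the traceback proves): scanning a whole /24 triggers nmap's raw-socket host discovery, which usually needs root privileges. python-nmap's scan() accepts a sudo flag, and restricting the run to a ping sweep keeps it small:
import nmap

nm = nmap.PortScanner()
# -sn limits nmap to host discovery (no port scan); sudo=True runs nmap with
# elevated privileges, which raw-socket discovery over a /24 typically needs
result = nm.scan(hosts='192.168.10.1/24', arguments='-sn', sudo=True)
print(nm.all_hosts())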

fetch_lfw_people error on changing parameters (Python)

I'm following this example from the sklearn docs (sklearn_doc). I can run the example as written, but if I change
fetch_lfw_people(min_faces_per_person=70, resize=0.4)
to
fetch_lfw_people(min_faces_per_person=200, resize=1.0)
I haven't changed anything except these parameters (one int for another int, one float for another float), so I can't see the problem.
I get the following error:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/datasets/lfw.py", line 144, in _load_imgs
from scipy.misc import imread
ImportError: cannot import name 'imread'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/.local/lib/python3.6/site-packages/scipy/misc/pilutil.py", line 19, in <module>
from PIL import Image, ImageFilter
ModuleNotFoundError: No module named 'PIL'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/datasets/lfw.py", line 146, in _load_imgs
from scipy.misc.pilutil import imread
File "/home/user/.local/lib/python3.6/site-packages/scipy/misc/pilutil.py", line 21, in <module>
import Image
ModuleNotFoundError: No module named 'Image'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/PycharmProjects/practica/ejercicio1.py", line 25, in <module>
lfw_people = fetch_lfw_people(min_faces_per_person=200, resize=1.0)
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/datasets/lfw.py", line 335, in fetch_lfw_people
min_faces_per_person=min_faces_per_person, color=color, slice_=slice_)
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 562, in __call__
return self._cached_call(args, kwargs)[0]
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 510, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 744, in call
output = self.func(*args, **kwargs)
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/datasets/lfw.py", line 236, in _fetch_lfw_people
faces = _load_imgs(file_paths, slice_, color, resize)
File "/home/user/anaconda3/envs/tema4/lib/python3.6/site-packages/sklearn/datasets/lfw.py", line 149, in _load_imgs
raise ImportError("The Python Imaging Library (PIL)"
ImportError: The Python Imaging Library (PIL) is required to load data from jpeg files
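The bottom of the traceback is the real hint: to decode the jpeg files, scipy.misc.imread needs PIL, which is missing from this environment. The first parameter combination probably succeeded only because joblib had already cached that result, so the images never had to be re-read. A hedged sketch of the likely fix: install Pillow (pip install pillow) and rerun the unchanged call.
# after: pip install pillow
from sklearn.datasets import fetch_lfw_people

# with Pillow providing PIL, scipy.misc.imread can decode the jpegs again
lfw_people = fetch_lfw_people(min_faces_per_person=200, resize=1.0)
print(lfw_people.images.shape)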

Cannot import theano after installation on Windows 10

I've still not been able to resolve this problem, after working on it for a week.
I'm thinking of giving up and just running theano on a virtual machine; there just doesn't seem to be any support out there for Windows 10!
Or am I wrong; is there an easy fix to this?
>>> import theano
Traceback (most recent call last):
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\lazylinker_c.py", line 75, in <module>
raise ImportError()
ImportError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\lazylinker_c.py", line 92, in <module>
raise ImportError()
ImportError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\cmodule.py", line 1784, in _try_compile_tmp
os.remove(exe_path + ".exe")
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\cturn\\AppData\\Local\\Temp\\try_march_3v6ffkv9.exe'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\__init__.py", line 66, in <module>
from theano.compile import (
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\compile\__init__.py", line 10, in <module>
from theano.compile.function_module import *
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\compile\function_module.py", line 21, in <module>
import theano.compile.mode
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\compile\mode.py", line 10, in <module>
import theano.gof.vm
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\vm.py", line 659, in <module>
from . import lazylinker_c
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\lazylinker_c.py", line 125, in <module>
args = cmodule.GCC_compiler.compile_args()
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\cmodule.py", line 2088, in compile_args
default_compilation_result, default_execution_result = try_march_flag(GCC_compiler.march_flags)
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\cmodule.py", line 1856, in try_march_flag
flags=cflags, try_run=True)
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\cmodule.py", line 2188, in try_compile_tmp
comp_args)
File "C:\Users\cturn\Anaconda3\lib\site-packages\theano\theano\gof\cmodule.py", line 1789, in _try_compile_tmp
err += "\n" + str(e)
TypeError: can't concat bytes to str
Um, can't concat bytes to str? What does this mean?
The error you are experiencing appears to result from another process using the same resources as the script you are trying to run, in this case the temporary try_march_3v6ffkv9.exe that Theano builds while testing compiler flags during import. Although it sounds trivial, I would recommend making sure that you have admin privileges, or at least privileges for the resources in question, and/or restarting your computer to kill the process that is holding the file. You could also look in the Task Manager and end any other processes using Python, but that might take longer.
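If hunting through Task Manager by hand is tedious, a small sketch along the same lines (psutil is an extra dependency, not something the traceback involves) lists other Python processes that could still be holding the temporary .exe open:
import os
import psutil

# list other Python interpreters that might be holding theano's temporary
# try_march_*.exe open; close them (or uncomment terminate) and retry the import
for proc in psutil.process_iter(attrs=["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if "python" in name and proc.pid != os.getpid():
        print(proc.info["pid"], proc.info["name"])
        # proc.terminate()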

KeyError when assigning "praw.Reddit" to variable

I could successfully connect to reddit's servers with OAuth2 some time ago, but when running my script just now, I get a KeyError followed by a NoSectionError. The code is below, followed by the exceptions (the code has been reduced to its essentials).
import praw
# Configuration
APP_UA = 'useragent'
...
...
...
r = praw.Reddit(APP_UA)
Error message:
Traceback (most recent call last):
File "D:\Directory\Python\lib\configparser.py", line 843, in items
d.update(self._sections[section])
KeyError: 'useragent'
A NoSectionError occurred when handling the above exception.
"During handling of the above exception, another exception occurred:"
'Traceback (most recent call last):
File "D:\Directory\Python\Projects\myprj for Reddit, globaloffensive\oddshotcrawler.py", line 19, in <module>
r = praw.Reddit(APP_UA)
File "D:\Directory\Python\lib\site-packages\praw\reddit.py", line 84, in __init__
**config_settings)
File "D:\Directory\Python\lib\site-packages\praw\config.py", line 47, in __init__
raw = dict(Config.CONFIG.items(site_name), **settings)
File "D:\Directory\Python\lib\configparser.py", line 846, in items
raise NoSectionError(section)
configparser.NoSectionError: No section: 'useragent'
[Finished in 0.2s]
Try giving it a user_agent kwarg:
r = praw.Reddit(user_agent=APP_UA)
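The positional argument also explains the traceback: praw.Reddit treats its first positional argument as a praw.ini site name, so 'useragent' is looked up as a config section and the NoSectionError follows. For a script-type app under PRAW 4+, client credentials are expected alongside the user agent; a sketch with placeholder values (not taken from the question):
import praw

# placeholder credentials from your Reddit app settings, not real values
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="myapp by u/yourusername",
)
print(reddit.read_only)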

Why am I getting a file not found exception while copying a file to HDFS using Pydoop?

My Python code using the Pydoop API is shown below. I have a text file /home/progen/test.txt. While running the script to copy this file to HDFS, I am getting an error.
import pydoop.hdfs as hdfs
local_path = '/home/progen/test.txt'
hdfs_path = '/spark_user/'
host = 'master'
port = 9000
hdfsobj = hdfs.hdfs(host, port, user='spark', groups=['supergroup'])
hdfsobj.copy(local_path, hdfsobj, hdfs_path)
I am getting the Error: "File does no exist"
hdfsCopyImpl(src=/home/progen/test.txt, dst=/spark_user/, deleteSource=0): FileUtil#copy error:
java.io.FileNotFoundException: File does not exist: /home/progen/test.txt
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
Traceback (most recent call last):
File "hdfs.py", line 8, in <module>
hdfsobj.copy(local_path, hdfsobj, hdfs_path)
File "/usr/local/lib/python2.7/dist-packages/pydoop/hdfs/fs.py", line 304, in copy
return self.fs.copy(from_path, to_hdfs, to_path)
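A likely reading of the failure (an assumption based on the copy signature in the traceback): hdfsobj.copy(from_path, to_hdfs, to_path) resolves the source path on the filesystem the handle was opened on, i.e. on HDFS, where /home/progen/test.txt naturally does not exist. A sketch of the upload using Pydoop's module-level put helper, which reads from the local filesystem and writes to the cluster from the default Hadoop configuration:
import pydoop.hdfs as hdfs

local_path = '/home/progen/test.txt'
hdfs_path = '/spark_user/test.txt'

# put() copies a local file onto HDFS, so the source path is resolved locally
hdfs.put(local_path, hdfs_path)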
