How to make a cube texture in Python's Ursina engine

I am wondering how to make a simple cube texture in the Ursina engine, the Python wrapper around Panda3D.
Here's the code that I have tried.
from ursina import *

app = Ursina()
window.fps_counter.enabled = False

class Voxel(Button):
    def __init__(self, position=(0,0,0), texture='red.png'):
        super().__init__(
            parent=scene,
            position=position,
            model='cube',
            origin_y=0.5,
            texture=texture,
            color=color.color(0, 0, random.uniform(0.9, 1.0)),
            scale=1.0
        )

for x in range(3):
    for y in range(3):
        for z in range(3):
            voxel = Voxel(position=(x,y,z))

EditorCamera()
app.run()
I want to know how to make a UV map (the one I made is really bad), and how to convert that UV map into a cube texture that Ursina can use.

Built-in textures work well for Ursina voxel engines. They include:
'arrow_down'
'arrow_right'
'brick'
'circle'
'circle_outlined'
'cobblestone'
'cursor'
'file_icon'
'folder'
'grass'
'heightmap_1'
'horizontal_gradient'
'noise'
'radial_gradient'
'reflection_map_3'
'shore'
'sky_default'
'sky_sunset'
'ursina_logo'
'ursina_wink_0000'
'ursina_wink_0001'
'vertical_gradient'
'white_cube'
They are all great textures, but if you don't want to use a built-in texture, use a .jpg, .psd or .png file, referenced without the file extension.
e.g. My_Image.png would be imported as:
Entity(model='cube',
       texture='My_Image')
If you have multiple files with the same name but different file extensions, e.g. My_Image.png and My_Image.jpg, and both of them are compatible, it will use whichever file comes first in the directory.
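For example, a minimal runnable sketch (the file name my_block.png is just an illustration; any .png/.jpg/.psd next to the script works the same way):
from ursina import *

app = Ursina()

# 'my_block' refers to my_block.png (or .jpg/.psd) in the same folder as the script;
# the extension is left off on purpose.
block = Entity(model='cube', texture='my_block')

EditorCamera()
app.run()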
I hope this helps. If so, please approve this so others can see that it works. It worked for me and I hope this improves your programs!

There is no such thing as an Ursina-specific texture format. When importing a texture, just use texture='cool_file' without the extension (.jpg, .png or .psd). Ursina currently supports png, jpeg and psd images (for psd, the psd-tools library must be installed, since it isn't by default).
If you want to do UV mapping, you need to do it directly in 3D software (e.g. Blender) and just import the model.
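For the UV-mapped route, a hedged sketch (the file names my_voxel.obj and my_voxel_texture.png are hypothetical; the model would be exported from Blender into the project folder):
from ursina import *

app = Ursina()

# The UV coordinates come from the exported model file;
# Ursina just applies the texture on top of them.
voxel = Entity(model='my_voxel', texture='my_voxel_texture')

EditorCamera()
app.run()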

Related

Ursina models scale

I'm building a game in Ursina, and for some entities I use my own texture images as models.
I tried all of the model types offered by the framework (e.g. quad, cube, etc.), but I'm not able to scale them properly, so the actual entity object is much bigger than the image. For example, my cat hero takes damage from the red box even when it isn't actually touching it, because the object is much bigger than the image. Is there a way to scale it?
Please see the image below:
The code for those 3 entities is here:
wall = Entity(model='quad', scale=(2, 3), x=-3,
              collider='box', color=color.white, texture='images/cat_tower.png')
level = Entity(model='quad', color=color.white, scale=(3, 1), x=4,
               collider='box', texture='images/cat_slider_1')
trap = Entity(model='quad', scale=(2, 2, 2), x=-5, y=1,
              collider='box', texture='images/trap.png', color=color.red)
For anyone having this problem: it turned out the problem was not in the code itself (or the Ursina framework) but in the way I edited the PNG/JPG images. Even though I did not have transparency set on each of them, the issue was that when I cropped them in Paint 3D with transparency off, I needed to scale the image so it filled the whole canvas, not leave it at the size it had after I used magic select and cropped it.
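If the hit area still doesn't match the visible part of the image, one option (a sketch only; the texture path is taken from the code above) is to attach an explicit BoxCollider sized to the drawn region instead of the default 'box' collider:
from ursina import *

app = Ursina()

wall = Entity(model='quad', scale=(2, 3), x=-3, texture='images/cat_tower.png')
# Shrink the collision box (in the entity's local space) so it only covers
# the visible sprite, not the whole quad.
wall.collider = BoxCollider(wall, center=Vec3(0, 0, 0), size=Vec3(0.8, 0.9, 0.1))

app.run()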

Custom animation in the Ursina engine

I am trying to play my own animation in the ursina engine but I have no idea how to. According to the documentation, I would need this code:
from ursina import *
app = Ursina()
window.color = color._20
animation = Animation('ursina_wink', fps=2, scale=1, filtering=None, autoplay=True)
EditorCamera()
app.run()
And that one does indeed work. However, I do not understand how I can replace the default animation with my own frames ('ursina_wink' is inbuilt).
The documentation says there is a frame parameter, so here is what I tried so far:
from ursina import *
app = Ursina()
window.color = color._20
coin1 = load_texture('animation/coin1')
coin2 = load_texture('animation/coin2')
coin3 = load_texture('animation/coin3')
coin4 = load_texture('animation/coin4')
coin5 = load_texture('animation/coin5')
coin6 = load_texture('animation/coin6')
animation = Animation('test',fps=2, scale=1, filtering=None, autoplay=True, frames = (coin1, coin2,coin3,coin4,coin5,coin6))
EditorCamera()
app.run()
But I cannot see anything on the screen and I cannot find an example online for it. Would really appreciate some help.
Animation loads an image sequence as a frame animation. You can load it like this: Animation('coin'), and it will take all the frames whose names start with 'coin' and load them alphabetically.
I will clarify this in the documentation.
I have come across your error and I have a simple solution: try converting the images into a GIF. The Ursina documentation says you can also use a GIF, which is much simpler but less customizable. Here is the relevant text:
Loads an image sequence as a frame animation.
Consider using SpriteSheetAnimation instead if possible.
So if you have some frames named image_000.png, image_001.png, image_002.png and so on,
you can load it like this: Animation('image')
You can also load a .gif by including the file type: Animation('image.gif')
So, I hope this helped you.
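Putting both answers together, a minimal sketch (the frame names coin_000.png, coin_001.png, ... are just an illustration and would sit next to the script):
from ursina import *

app = Ursina()
window.color = color._20

# Animation picks up every texture whose name starts with 'coin',
# sorts them alphabetically and plays them as frames.
coin = Animation('coin', fps=2, autoplay=True)

# Alternatively, a single animated GIF can be loaded by keeping the extension:
# coin = Animation('coin.gif', fps=2, autoplay=True)

EditorCamera()
app.run()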

Rendering in Blender doesn't take rotation updates of camera and light directions

I'm trying to render a cube (default blender scene) with a camera facing it. I have added a spotlight at the same location as the camera. Spotlight direction also faces towards the cube.
When I render, location changes take effect for both the camera and the spotlight, but rotations don't. The scene context update is deprecated now. I have seen other update answers, but they don't seem to help.
I have found some workarounds and they seem to work, but this is not the correct way:
If I run the same set of commands twice (in a loop), I get the correct render.
If I run the script from Blender's Python console (only once), I get the correct render. But if the same code is run as a script inside Blender, the render is wrong again.
import pdb
import numpy as np
import bpy

def look_at(obj_camera, point):
    loc_camera = obj_camera.matrix_world.to_translation()
    direction = point - loc_camera
    rot_quat = direction.to_track_quat('-Z', 'Y')
    obj_camera.rotation_euler = rot_quat.to_euler()

data_path = 'some folder'
# Assume a big array of locations where the camera and spotlight need to be placed,
# then made to look towards the cube.
locs = np.array([0.00000000e+00, -1.00000000e+01, 3.00000011e-06])

obj_camera = bpy.data.objects["Camera"]
obj_other = bpy.data.objects["Cube"]
bpy.data.lights['Light'].type = 'SPOT'
obj_light = bpy.data.objects['Light']

loc = locs
i = 0

##### If I run the following lines two times, a correct render is obtained.
obj_camera.location = loc
obj_light.location = obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())

bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)
You might need to call bpy.context.view_layer.update() (bpy.context.scene.update() in versions older than Blender 2.8) after changing the camera orientation with obj_camera.rotation_euler = rot_quat.to_euler(), and make sure that the layers that are going to be rendered are active when calling update() (see https://blender.stackexchange.com/questions/104958/object-locations-not-updating-before-render-python).
(A bit late ;-) but this was one of the rare questions I found for a related issue.)
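For illustration, a sketch of where that call would sit in the script above (Blender 2.8+ API):
# ... after placing the camera and the spotlight ...
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())

# Force the dependency graph to re-evaluate the new rotations before rendering.
bpy.context.view_layer.update()

bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)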

Turtle module - Saving an image

I would like to figure out how to save a bitmap or vector graphics image after creating a drawing with Python's turtle module. After a bit of googling I can't find an easy answer. I did find a module called canvas2svg, but I'm very new to Python and I don't know how to install the module. Is there some built-in way to save images of the turtle canvas? If not, where do I put custom modules for Python on an Ubuntu machine?
from tkinter import * # Python 3
#from Tkinter import * # Python 2
import turtle
turtle.forward(100)
ts = turtle.getscreen()
ts.getcanvas().postscript(file="duck.eps")
This will help you; I had the same problem. I Googled it, but solved it by reading the source of the turtle module.
The canvas (tkinter) object has the postscript function; you can use it.
The turtle module has getscreen, which gives you the "turtle screen", which in turn gives you the Tkinter canvas the turtle is drawing on.
This will save the drawing in Encapsulated PostScript format, so you can open it in GIMP for sure, but there are other viewers too. Or you can Google how to make a .gif from it. You can also use the free and open source Inkscape application to view .eps files, and then save them as vector or bitmap image files.
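If you need a bitmap rather than PostScript, one option (a sketch; it assumes Ghostscript is installed, because Pillow delegates EPS decoding to it) is to convert the saved file with Pillow:
from PIL import Image

# Pillow rasterizes EPS files through Ghostscript.
img = Image.open("duck.eps")
img.save("duck.png")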
I wrote the svg-turtle package that supports the standard Turtle interface from Python, and writes an SVG file using the svgwrite module. Install it with pip install svg-turtle, and then call it like this:
from svg_turtle import SvgTurtle

def draw_spiral(t):
    t.fillcolor('blue')
    t.begin_fill()
    for i in range(20):
        d = 50 + i*i*1.5
        t.pencolor(0, 0.05*i, 0)
        t.width(i)
        t.forward(d)
        t.right(144)
    t.end_fill()

def write_file(draw_func, filename, width, height):
    t = SvgTurtle(width, height)
    draw_func(t)
    t.save_as(filename)

def main():
    write_file(draw_spiral, 'example.svg', 500, 500)
    print('Done.')

if __name__ == '__main__':
    main()
The canvasvg package is another option. After you run some turtle code, it will convert all the items on the tkinter canvas into an SVG file. This requires tkinter support and a display, whereas svg-turtle doesn't.
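A minimal sketch of that approach (assuming canvasvg is installed with pip install canvasvg):
import turtle
import canvasvg

turtle.forward(100)

# Convert everything currently drawn on the underlying tkinter canvas to SVG.
canvasvg.saveall("drawing.svg", turtle.getscreen().getcanvas())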

Digital Image cropping in Python

Got this question from a professor, a physicist.
I am a beginner in Python programming. I am not a computer professional; I am a physicist. I was trying to write some code in Python for my own research, which involves a little image processing.
All I need to do is display an image, select a region of interest with my mouse, and finally crop out the selected region. I can do this in Matlab using the ginput() function.
I tried using PIL, but I find that after I issue the command Image.show(), the image is displayed and the program then halts there unless I exit from the image window. Is there any way to implement what I was planning? Do I need to download any other module? Please advise.
While I agree with David that you should probably just use GIMP or some other image manipulation program, here is a script (as I took it to be an exercise for the reader) using pygame that does what you want. You will need to install pygame as well as PIL; usage would be:
scriptname.py <input_path> <output_path>
Actual script:
import pygame, sys
from PIL import Image

pygame.init()

def displayImage(screen, px, topleft):
    screen.blit(px, px.get_rect())
    if topleft:
        pygame.draw.rect(screen, (128, 128, 128),
                         pygame.Rect(topleft[0], topleft[1],
                                     pygame.mouse.get_pos()[0] - topleft[0],
                                     pygame.mouse.get_pos()[1] - topleft[1]))
    pygame.display.flip()

def setup(path):
    px = pygame.image.load(path)
    screen = pygame.display.set_mode(px.get_rect()[2:])
    screen.blit(px, px.get_rect())
    pygame.display.flip()
    return screen, px

def mainLoop(screen, px):
    topleft = None
    bottomright = None
    runProgram = True
    while runProgram:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                runProgram = False
            elif event.type == pygame.MOUSEBUTTONUP:
                if not topleft:
                    topleft = event.pos
                else:
                    bottomright = event.pos
                    runProgram = False
        displayImage(screen, px, topleft)
    return (topleft + bottomright)

if __name__ == "__main__":
    screen, px = setup(sys.argv[1])
    left, upper, right, lower = mainLoop(screen, px)
    im = Image.open(sys.argv[1])
    im = im.crop((left, upper, right, lower))
    im.save(sys.argv[2])
Hope this helps :)
For what it's worth (coming from another physicist), I would just do this in an image processing program like the GIMP. The main benefit of doing this task in Python (or any language) would be to save time by automating the process, but unless you - well, the professor - can somehow develop an algorithm to automatically figure out what part of the image to crop, there doesn't seem to be much time to be saved by automation.
If I remember correctly, GIMP is actually scriptable, possibly with Python, so it might be possible to write a time-saving GIMP script to do what your professor friend wants.
Image.show() just calls whatever simple picture viewer it can find on the current platform, one that may or may not have a crop-and-save facility.
If you are on a Windows box and you just need to make it work on your machine, set the ‘Open with...’ association to make it so running an image loads it into an editor of your choice. On OS X and *nix you'd want to hack the _showxv() method at the bottom of Image.py to change the command used to open the image.
If you do actually need to provide a portable solution, you'll need to use a UI framework to power your cropping application. The choices boil down to Tkinter (ImageTk.py gives you a wrapper for displaying PIL images in Tk), PyQT4 (ImageQt in PIL 1.1.6 gives you a wrapper for displaying images in QT4) or wxPython (a higher-level application authoring toolkit using wxWidgets). It'll be quite a bit of work to get the hang of a full UI kit, but you'll be able to completely customise how your application's interface will work.
Is there a script or library in Python to auto-crop images? See this related question:
Automatically crop image
What you are looking for is the matplotlib module, which emulates Matlab. See the ginput() function. That allows you to find the bounding box; then you can use crop from PIL.
http://matplotlib.sourceforge.net/api/figure_api.html
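A minimal sketch of that approach (the file name photo.png is just an illustration):
import matplotlib.pyplot as plt
from PIL import Image

im = Image.open("photo.png")

# Show the image and collect two clicks: the top-left and bottom-right corners.
plt.imshow(im)
(x1, y1), (x2, y2) = plt.ginput(2)
plt.close()

cropped = im.crop((int(x1), int(y1), int(x2), int(y2)))
cropped.save("photo_cropped.png")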
