Blenderyard

July 4, 2014

Freestyle renders of a 3D scanned ram skull

Filed under: Uncategorized — blenderyard @ 4:20 PM

The following images are Freestyle renders of a 3D scanned ram skull model (available at the Ten24 3D Scan Store).  The rendering was done with Blender 2.71.  The short hatching-like lines are suggestive contours, smoothed by the Bezier Curve geometry modifier and tapered by the Along Stroke thickness modifier.

[Images: three Freestyle renders of the ram skull model]
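
For reference, here is a minimal sketch of how this kind of line style can be set up through the bpy API in Blender 2.71 (the modifier values are illustrative, not the exact settings used for these renders):

import bpy

scene = bpy.context.scene
scene.render.use_freestyle = True

# add a line set that draws suggestive contours (the short hatching-like lines)
lineset = scene.render.layers.active.freestyle_settings.linesets.new("LineSet")
lineset.select_suggestive_contour = True

linestyle = lineset.linestyle
# smooth the strokes with a Bezier Curve geometry modifier
bezier = linestyle.geometry_modifiers.new(name="Bezier Curve", type='BEZIER_CURVE')
bezier.error = 10.0   # fitting tolerance (illustrative value)
# taper the strokes with an Along Stroke thickness modifier
taper = linestyle.thickness_modifiers.new(name="Along Stroke", type='ALONG_STROKE')
taper.value_min = 0.0   # thickness at one end of the stroke (illustrative)
taper.value_max = 3.0   # thickness at the other end (illustrative)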

April 14, 2014

World’s first fanzine on Blender and Freestyle (2013)

Filed under: Uncategorized — blenderyard @ 3:31 PM

[Image: the fanzine cover]

I finally got a copy of the world’s first fanzine on Blender and Freestyle, published in February 2013 (note that the publication date was even before the first Blender release that shipped Freestyle).  Many thanks to @blekei, @mato_sus304 and @tksg8086 for the wonderful publication!  This is absolutely rewarding!

April 10, 2014

Fancy line stylization tryouts

Filed under: Blender, Freestyle — blenderyard @ 2:57 PM

[Image: four line stylization results]

The image above shows the results of four fancy line styles, all created using Freestyle in the Parameter Editor mode (i.e., without Python scripting) in Blender 2.70.

February 17, 2014

Yukicotan: A capability study of Freestyle (Blender 2.69)

Filed under: Blender, Freestyle — blenderyard @ 2:57 PM

A capability study of Freestyle (Blender 2.69) was done using a 3D model recently released at http://www.yukicocp.com/.

[Images: two Freestyle test renders and the imported mesh (left to right)]

The image to the left is a simple render composed of silhouettes (thin lines) and external contours (thick lines).  The image in the middle consists of the same silhouettes and external contours plus suggestive contours (dotted lines).  The image to the right shows the mesh data (841K faces) imported into Blender.  No textures were included in the original 3D model (it was prepared specifically for 3D printing), so only the shaded surface and auto-generated lines appear in the Freestyle renders.
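
As a rough sketch of how this kind of setup can be expressed through the bpy API (assuming Blender 2.69; the line set names and the thickness and dash values are illustrative, not the settings actually used):

import bpy

scene = bpy.context.scene
scene.render.use_freestyle = True
fs = scene.render.layers.active.freestyle_settings

# thin silhouettes
thin = fs.linesets.new("Silhouettes")
thin.select_crease = thin.select_border = False   # keep only silhouettes
thin.select_silhouette = True
thin.linestyle.thickness = 1.5

# thick external contours
thick = fs.linesets.new("External contours")
thick.select_silhouette = thick.select_crease = thick.select_border = False
thick.select_external_contour = True
thick.linestyle.thickness = 4.0

# dotted suggestive contours
dots = fs.linesets.new("Suggestive contours")
dots.select_silhouette = dots.select_crease = dots.select_border = False
dots.select_suggestive_contour = True
dots.linestyle.use_dashed_line = True
dots.linestyle.dash1 = dots.linestyle.gap1 = 3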

February 4, 2014

Anisotropic line thickness in Freestyle

Filed under: Blender, Freestyle — blenderyard @ 11:31 AM

[Image: comparison of anisotropic line thickness renders]

The image above shows the effects of anisotropic line thickness implemented in Freestyle for Blender in the form of a custom style module written in Python.  The idea is to increase line thickness when the stroke travels in a specific direction (expressed by the angle of the tangent line from the X axis, evaluated at individual stroke vertices).  Here the mapping from the stroke traveling direction to line thickness is expressed as pairs of angle and thickness values, so that artists can place thickness peaks at arbitrary directions (hard-coded in the script for now), and linear interpolation is used to ensure smooth thickness changes.  In the test renders above, the anisotropic thickness shader was applied only to external contours; all other lines have a constant thickness.

The full listing of the style module used for the test renders is as follows:

from freestyle import *
from logical_operators import *
from shaders import *
import math

class AnisotropicThicknessShader(StrokeShader):
    def __init__(self):
        StrokeShader.__init__(self)

    def shade(self, stroke):
        # Preset (angle, thickness) pairs defining the thickness peaks.
        # One preset is selected by toggling True/not True; when several are
        # enabled, the last enabled block takes effect (here the second one).
        if True:
            angles = [-180, -135, -90, -45, 0, 45, 90, 135, 180]
            thickness = [1, 5, 1, 5, 1, 5, 1, 5, 1]
        if True:
            angles = [-180, -120, -60, 0, 60, 120, 180]
            thickness = [1, 5, 1, 5, 1, 5, 1]
        if not True:
            angles = [-180, -90, 0, 90, 180]
            thickness = [1, 5, 1, 5, 1]
        if not True:
            angles = [-180, -45, 45, 135, 180]
            thickness = [1, 1, 5, 1, 1]
        if not True:
            angles = [-180, 0, 180]
            thickness = [1, 5, 1]
        f = Normal2DF0D()
        it = stroke.stroke_vertices_begin()
        while not it.is_end:
            n = -f(Interface0DIterator(it)) # normal
            a = math.atan2(n[1], n[0]) # angle in radians
            a = a / math.pi * 180 # angle in degrees
            # linear interpolation
            for i in range(1, len(angles)):
                if angles[i-1] <= a <= angles[i]:
                    break
            r = (a - angles[i-1]) / (angles[i] - angles[i-1])
            t = thickness[i-1] + r * (thickness[i] - thickness[i-1])
            it.object.attribute.thickness = (t/2, t/2)
            it.increment()

upred = AndUP1D(ExternalContourUP1D(), QuantitativeInvisibilityUP1D(0))
Operators.select(upred)
Operators.bidirectional_chain(ChainSilhouetteIterator(), NotUP1D(upred))
shaders_list = [
    SamplingShader(2),
    AnisotropicThicknessShader(),
    pyMaterialColorShader(0.5),
    ]
Operators.create(TrueUP1D(), shaders_list)

The script was tested with Blender 2.69 (may require code updates for Blender 2.70 and later).
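
In Blender 2.70 and later the Freestyle Python API was reorganized into submodules, so the wildcard imports at the top of the script would need to be replaced by something along these lines (an untested sketch; please double-check against the API documentation of the release you use):

import math
from freestyle.types import Operators, StrokeShader, Interface0DIterator
from freestyle.functions import Normal2DF0D
from freestyle.predicates import (AndUP1D, NotUP1D, TrueUP1D,
                                  ExternalContourUP1D,
                                  QuantitativeInvisibilityUP1D)
from freestyle.chainingiterators import ChainSilhouetteIterator
from freestyle.shaders import SamplingShader, pyMaterialColorShader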

September 7, 2011

Mesh deform with non-uniform perspective projection

Filed under: Blender — blenderyard @ 7:22 PM

This blog article presents a proof-of-concept Python script for Blender that implements a mesh deformation effect based on non-uniform perspective projection.  In animated cartoons, objects often undergo an extreme, non-realistic deformation when they are close to the audience.  For instance, when a cartoon character raises her hand towards the camera, the hand is drawn larger than it would appear in a photo-realistic picture, to give the impression that the hand comes right up to the viewer’s eyes.  The purpose of the present mesh deform script is to achieve this visual effect in Blender.

The basic idea behind the script is to use a variable focal length that depends on the distance between mesh vertices and the active camera.  Focal length is one of the camera parameters in Blender; together with the image aspect ratio it defines the perspective projection.  With an increasing focal length, objects appear larger in the rendered image.

The script deforms mesh objects in a 3D scene by applying a non-uniform perspective projection with a variable focal length defined as a function of the distance from the camera to each mesh vertex.  Since the distance from the camera varies vertex by vertex, mesh objects are deformed as if, for instance, the far side of the 3D scene were seen by a camera with a focal length of 50mm while the near side were seen by another with a focal length of 80mm.  A smooth, non-linear interpolation is used for the mapping from distance to focal length, as illustrated in the following plot, where the horizontal axis is the distance from the camera and the vertical axis is the focal length.  The input and output intervals of the mapping (i.e., the min/max distance and the min/max focal length) are user-defined parameters of the deformer.
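
As a minimal sketch of this mapping (mirroring the sigmoid easing and the default parameter values of the script below; it maps distance to a focal-length scale factor rather than to an absolute focal length):

from math import exp

def focal_scale(d, d_near=10.0, d_far=17.0, fac_near=1.6, fac_far=1.0):
    # normalized distance: 0 at d_near, 1 at d_far
    u = (d - d_near) / (d_far - d_near)
    # smooth S-curve easing; saturates for distances outside [d_near, d_far]
    s = 1.0 / (1.0 + exp(-(u * 12.0 - 6.0)))
    return fac_near + s * (fac_far - fac_near)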

The following set of images demonstrates the visual effect of non-uniform perspective projection.  The images on the left and right are renders of the same 3D model using a camera with a focal length of 50mm and 80mm, respectively.  The image in the middle is a render made after the 3D model was deformed by the script.  It can be clearly seen that the far side of the model looks as if the render had been done with the 50mm focal length, while the near side appears similar to the 80mm render.

[Images: 50mm render (left), deformed model (middle), 80mm render (right)]

Here is the deformation script used for rendering the example image in the middle.  Known limitations are: (a) the deformation is applied only to mesh objects; (b) all meshes are modified directly in place without making copies (so the script is not suitable for animation rendering); and (c) mirror modifiers have to be removed by permanently applying them to the meshes.  Note also that the script works only with perspective cameras.  A future direction is to implement this deformer as a mesh modifier in Blender.

# camera_distortion.py
# Tamito KAJIYAMA <2 September 2011>

# For each mesh object, apply the following matrices to each of the
# mesh vertices in this order:
# 1. the 'matrix_world' model-view matrix of the mesh object (the
#    vertices in the local coordinate system [CS] are projected to the
#    world CS)
# 2. the inverse model-view matrix of the camera (the vertices are
#    projected to the camera CS)
# 3. a non-uniform camera projection matrix with variable focal length
#    (the mesh is distorted)
# 4. the model-view matrix of the camera (the vertices are projected
#    back to the world CS)
# 5. the inverse 'matrix_world' model-view of the mesh object (the
#    vertices are projected back to the local CS)

import bpy
from math import *
from mathutils import *

debug = False

# user-defined parameters
scene_name = 'Scene'
d_near = 10; d_far = 17
fac_near = 1.6; fac_far = 1.0

sce = bpy.data.scenes[scene_name]
cam = sce.camera

# the model-view matrix of the camera
cam_mv = cam.matrix_world
# the inverse model-view matrix of the camera
cam_mv_inv = cam_mv.copy()
cam_mv_inv.invert()

fov = cam.data.angle
focus = tan(fov / 2.0)
near = cam.data.clip_start
aspect = float(sce.render.resolution_x) / float(sce.render.resolution_y)
if debug:
    print('fov =', fov, '[rad]')
    print('focus =', focus)
    print('near =', near)
    print('aspect =', aspect)
cd_mat = Matrix([
        Vector([focus, 0, 0, 0]),
        Vector([0, aspect * focus, 0, 0]),
        Vector([0, 0, 65535.0/65536.0, 1]),
        Vector([0, 0, -near, 0])])
cd_mat_inv = cd_mat.copy()
cd_mat_inv.invert()

def sigmoid(u):
    # smooth S-curve easing: maps u in [0, 1] to (0, 1); f controls the steepness
    f = 6.0
    t = u * 2.0 * f - f
    return 1.0 / (1.0 + exp(-t))

for ob in sce.objects:
    # check if the object is a mesh
    if ob.type != 'MESH':
        continue
    # the model-view matrix of the mesh object
    obj_mv = ob.matrix_world
    # the inverse model-view matrix of the mesh object
    obj_mv_inv = obj_mv.copy()
    obj_mv_inv.invert()
    # apply the camera distortion to mesh vertices
    for i in range(len(ob.data.vertices)):
        p0 = ob.data.vertices[i].co
        p1 = obj_mv * p0
        p2 = cam_mv_inv * p1
        # scale 'focus' by a function of distance from the camera
        d = p2.length
        dn = (d - d_near) / (d_far - d_near)
        fac = fac_near + sigmoid(dn) * (fac_far - fac_near)
        if debug:
            print('i =', i, 'd =', d, 'fac =', fac)
        cd_mat[0][0] = fac * focus
        cd_mat[1][1] = fac * focus * aspect
        p3 = (cd_mat_inv * cd_mat) * p2
        p4 = cam_mv * p3
        p5 = obj_mv_inv * p4
        ob.data.vertices[i].co = p5

December 29, 2010

X-32 JSF

Filed under: Freestyle — blenderyard @ 11:16 AM

A personalized jet fighter in the Joint Strike Fighter (JSF) contest.  The character design was originally made by an anonymous artist (link).  Modeled and rendered with Blender/Freestyle in August 2009.  Speed lines in the fourth image were drawn based on Rylan’s tips.

December 26, 2010

Blender 2.49 splash screen contest

Filed under: Freestyle — blenderyard @ 11:04 AM

My entry to the Blender 2.49 splash screen contest in April 2009.  Modeled and rendered with Blender/Freestyle and retouched with GIMP.

The image on the left is a screenshot that shows the 3D model, material nodes for the halftone effects, and composite nodes for superimposing gradient colors onto the monochrome halftone and line drawing.  On the right is a raw rendering output before retouching.

December 15, 2010

Initial Freestyle rendering results

Filed under: Freestyle — blenderyard @ 12:52 AM

Here are some old renders using early versions of Freestyle for Blender.  These were my initial results with the new NPR rendering functionality, made just after the first Win32 build of the Blender Freestyle branch came out in September 2008.  The character design was originally made by an anonymous artist.

These are renders with colored lines using pyMaterialColorShader() on top of shadeless surface colors.  The style module used for these renders is available here in the BlenderArtists Freestyle thread.  At the time these renders were made, the Freestyle branch was based on the 2.4x code base and anti-aliased lines were not yet supported.  A common workaround was to render an image larger than the final size and scale it down.

These are results of a constant line color combined with halftone surface colors by means of PyNodes.  The halftone script used for these renders is available here in the BlenderArtists PyNode cookbook thread.

December 11, 2010

Lilies

Filed under: Freestyle — blenderyard @ 3:50 PM

An old render of lilies dating back to March 2009.  Modeled and rendered with the Blender Freestyle branch, using a slightly modified qi0.py to get material-based stroke colors through pyMaterialColorShader.

A test render of group-based feature edge selection.

A render using sketch_topology_broken.py.

The last two results were made with the Blender Freestyle branch, revision 33554.
