February 17, 2014

Yukicotan: A capability study of Freestyle (Blender 2.69)

Filed under: Blender, Freestyle — blenderyard @ 2:57 PM

A capability study of Freestyle (Blender 2.69) was done using a recently released 3D model.

[Thumbnails, left to right: Yukicotan_test1b, Yukicotan_test1a, Yukicotan_mesh]

The image to the left is a simple render composed of silhouettes (thin lines) and external contours (thick lines).  The image in the middle consists of the same silhouettes and external contours as well as suggestive contours (dotted lines).  The image to the right shows the mesh data (841K faces) imported into Blender.  No textures were included in the original 3D model (specifically prepared for 3D printing), so that only the shaded surface and auto-generated lines are shown in the Freestyle renders.

February 4, 2014

Anisotropic line thickness in Freestyle

Filed under: Blender, Freestyle — blenderyard @ 11:31 AM


The image above shows the effects of anisotropic line thickness implemented in Freestyle for Blender in the form of a custom style module written in Python.  The idea is to increase line thickness when the stroke travels in a specific direction (expressed by the angle of the tangent line from the X axis evaluated at individual stroke vertices). Here the mapping from the stroke traveling direction to line thickness was expressed by pairs of angle and thickness values to allow artists to place thickness peaks at arbitrary directions (hard-coded in the script for now), and linear interpolation was used to ensure smooth thickness changes.  In the test renders above, the anisotropic thickness shader was applied only to external contours.  Other lines have a constant thickness.

The full listing of the style module used for the test renders is as follows:

from freestyle import *
from logical_operators import *
from shaders import *
import math

class AnisotropicThicknessShader(StrokeShader):
    def shade(self, stroke):
        # Angle (in degrees) and thickness breakpoints; uncomment one of
        # the alternative presets below to move the thickness peaks.
        angles = [-180, -120, -60, 0, 60, 120, 180]
        thickness = [1, 5, 1, 5, 1, 5, 1]
        #angles = [-180, -135, -90, -45, 0, 45, 90, 135, 180]
        #thickness = [1, 5, 1, 5, 1, 5, 1, 5, 1]
        #angles = [-180, -90, 0, 90, 180]
        #thickness = [1, 5, 1, 5, 1]
        #angles = [-180, -45, 45, 135, 180]
        #thickness = [1, 1, 5, 1, 1]
        #angles = [-180, 0, 180]
        #thickness = [1, 5, 1]
        f = Normal2DF0D()
        it = stroke.stroke_vertices_begin()
        while not it.is_end:
            n = -f(Interface0DIterator(it)) # 2D normal at the vertex
            a = math.atan2(n[1], n[0]) # angle in radians
            a = a / math.pi * 180 # angle in degrees
            # piecewise-linear interpolation of thickness over the angle
            for i in range(1, len(angles)):
                if angles[i-1] <= a <= angles[i]:
                    r = (a - angles[i-1]) / (angles[i] - angles[i-1])
                    t = thickness[i-1] + r * (thickness[i] - thickness[i-1])
                    it.object.attribute.thickness = (t/2, t/2)
                    break
            it.increment()

upred = AndUP1D(ExternalContourUP1D(), QuantitativeInvisibilityUP1D(0))
Operators.select(upred)
Operators.bidirectional_chain(ChainSilhouetteIterator(), NotUP1D(upred))
shaders_list = [
    ConstantColorShader(0, 0, 0),
    AnisotropicThicknessShader(),
    ]
Operators.create(TrueUP1D(), shaders_list)
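The angle-to-thickness mapping in the shader is plain piecewise-linear interpolation and can be tried outside Blender. Below is a minimal standalone sketch using one of the presets from the listing (the helper function name is just for illustration):

```python
def anisotropic_thickness(angle_deg, angles, thickness):
    """Piecewise-linear interpolation of line thickness over angle breakpoints."""
    for i in range(1, len(angles)):
        if angles[i-1] <= angle_deg <= angles[i]:
            r = (angle_deg - angles[i-1]) / (angles[i] - angles[i-1])
            return thickness[i-1] + r * (thickness[i] - thickness[i-1])
    return float(thickness[-1])

# preset with thickness peaks at -90 and +90 degrees
angles = [-180, -90, 0, 90, 180]
thickness = [1, 5, 1, 5, 1]
print(anisotropic_thickness(-45, angles, thickness))  # halfway between 5 and 1: 3.0
```

Because the breakpoint lists start at -180 and end at 180, any angle returned by atan2 (after conversion to degrees) falls inside exactly one interval.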

The script was tested with Blender 2.69 (may require code updates for Blender 2.70 and later).

September 7, 2011

Mesh deform with non-uniform perspective projection

Filed under: Blender — blenderyard @ 7:22 PM

This blog article presents a proof-of-concept Python script for Blender that implements a mesh deformation effect based on non-uniform perspective projection.  In animated cartoons, objects often undergo extreme, non-realistic deformation when they are close to the audience.  For instance, when a cartoon character raises her hand towards the camera, the hand is drawn larger than it would appear in a photo-realistic picture, with the aim of giving the audience the feeling that the hand comes right up to their eyes.  The purpose of the present mesh deform script is to achieve this visual effect in Blender.

The basic idea behind the script is to use a variable focal length that depends on the distance between mesh vertices and the active camera.  Focal length is one of the camera parameters in Blender and, together with the image aspect ratio, defines a perspective projection.  With an increasing focal length, objects appear larger in the rendered image.
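For concreteness, focal length and field of view are related through the camera's sensor width; the helper below is an illustrative sketch (assuming Blender's default 32 mm sensor width), not Blender API code:

```python
import math

def focal_length_to_fov(focal_mm, sensor_mm=32.0):
    """Horizontal field of view (in radians) for a given focal length."""
    return 2.0 * math.atan(sensor_mm / (2.0 * focal_mm))

print(round(math.degrees(focal_length_to_fov(50.0)), 1))  # 35.5 degrees
print(round(math.degrees(focal_length_to_fov(80.0)), 1))  # 22.6 degrees
```

A longer focal length narrows the field of view, so the same object fills more of the frame.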

The script deforms mesh objects in a 3D scene by applying a non-uniform perspective projection with variable focal length defined as a function of the distance from the camera to mesh vertices.  Since the distance from the camera varies vertex by vertex, mesh objects are deformed as if, for instance, the far side of the 3D scene is seen by a camera with the focal length of 50mm while the near side is seen by another with the focal length of 80mm.  A smooth, non-linear interpolation is used for the mapping from the distance to the focal length as illustrated in the following plot, where the horizontal axis is the distance from the camera and the vertical axis is the focal length.   The input and output intervals of the mapping (i.e., the min/max distance and the min/max focal length) are user-defined parameters of the deformer.

The following set of images demonstrates the visual effect of non-uniform perspective projection.  The images on the left and right are renders of the same 3D model using a camera with a focal length of 50mm and 80mm, respectively.  The image in the middle is a render made after the 3D model was deformed by the script.  It can be clearly seen that the far side of the model looks as if it were rendered with a focal length of 50mm, while the near side appears similar to the 80mm render.


Here is the deformation script used to render the example image in the middle.  Known limitations are: (a) the deformation is applied only to mesh objects; (b) all meshes are modified in place without making copies (so the script is not suitable for animation rendering); and (c) mirror modifiers have to be removed by permanently applying them to meshes.  Note also that the script works only with perspective cameras.  A future direction is to implement this deformer as a mesh modifier in Blender.

# Tamito KAJIYAMA <2 September 2011>

# For each mesh object, apply the following matrices to each of the
# mesh vertices in this order:
# 1. the 'matrix_world' model-view matrix of the mesh object (the
#    vertices in the local coordinate system [CS] are projected to the
#    world CS)
# 2. the inverse model-view matrix of the camera (the vertices are
#    projected to the camera CS)
# 3. a non-uniform camera projection matrix with variable focal length
#    (the mesh is distorted)
# 4. the model-view matrix of the camera (the vertices are projected
#    back to the world CS)
# 5. the inverse 'matrix_world' model-view of the mesh object (the
#    vertices are projected back to the local CS)

import bpy
from math import *
from mathutils import *

debug = False

# user-defined parameters
scene_name = 'Scene'
d_near = 10; d_far = 17
fac_near = 1.6; fac_far = 1.0

sce =[scene_name]
cam =

# the model-view matrix of the camera
cam_mv = cam.matrix_world
# the inverse model-view matrix of the camera
cam_mv_inv = cam_mv.copy()

fov =  # horizontal field of view in radians
focus = tan(fov / 2.0)
near =  # near clipping distance
aspect = float(sce.render.resolution_x) / float(sce.render.resolution_y)
if debug:
    print('fov =', fov, '[rad]')
    print('focus =', focus)
    print('near =', near)
    print('aspect =', aspect)
cd_mat = Matrix([
        Vector([focus, 0, 0, 0]),
        Vector([0, aspect * focus, 0, 0]),
        Vector([0, 0, 65535.0/65536.0, 1]),
        Vector([0, 0, -near, 0])])
cd_mat_inv = cd_mat.copy()

def sigmoid(u):
    f = 6.0
    t = u * 2.0 * f - f
    return 1.0 / (1.0 + exp(-t))

for ob in sce.objects:
    # skip objects other than meshes
    if ob.type != 'MESH':
        continue
    # the model-view matrix of the mesh object
    obj_mv = ob.matrix_world
    # the inverse model-view matrix of the mesh object
    obj_mv_inv = obj_mv.copy()
    # apply the camera distortion to mesh vertices
    for i in range(len(
        p0 =[i].co
        p1 = obj_mv * p0
        p2 = cam_mv_inv * p1
        # scale 'focus' by a function of distance from the camera
        d = p2.length
        dn = (d - d_near) / (d_far - d_near)
        fac = fac_near + sigmoid(dn) * (fac_far - fac_near)
        if debug:
            print('i =', i, 'd =', d, 'fac =', fac)
        cd_mat[0][0] = fac * focus
        cd_mat[1][1] = fac * focus * aspect
        p3 = (cd_mat_inv * cd_mat) * p2
        p4 = cam_mv * p3
        p5 = obj_mv_inv * p4[i].co = p5
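The distance-to-scale mapping used by the script can be exercised on its own. The sketch below reproduces the sigmoid blend with the same sample parameter values (d_near = 10, d_far = 17, fac_near = 1.6, fac_far = 1.0):

```python
from math import exp

def sigmoid(u):
    f = 6.0
    t = u * 2.0 * f - f  # map [0, 1] to [-6, 6]
    return 1.0 / (1.0 + exp(-t))

def scale_factor(d, d_near=10.0, d_far=17.0, fac_near=1.6, fac_far=1.0):
    """Blend from fac_near to fac_far as distance d goes from d_near to d_far."""
    dn = (d - d_near) / (d_far - d_near)
    return fac_near + sigmoid(dn) * (fac_far - fac_near)

print(round(scale_factor(10.0), 3))  # close to fac_near (1.6)
print(round(scale_factor(13.5), 3))  # midpoint: exactly (1.6 + 1.0) / 2 = 1.3
print(round(scale_factor(17.0), 3))  # close to fac_far (1.0)
```

Vertices near the camera are thus scaled up by nearly 1.6x, while distant vertices are left almost unchanged, which is exactly the "near side looks like 80mm, far side looks like 50mm" effect described above.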