This blog article presents a proof-of-concept Python script for Blender that implements a mesh deformation effect based on non-uniform perspective projection. In animated cartoons, objects often undergo extreme, non-realistic deformation when they are close to the audience. For instance, when a cartoon character raises her hand toward the camera, the hand is drawn larger than it would appear in a photo-realistic picture, to give the impression that it is reaching right in front of the viewer's eyes. The purpose of the present mesh deformation script is to achieve this visual effect in Blender.
The basic idea behind the script is to use a variable focal length that depends on the distance between mesh vertices and the active camera. Focal length is one of the camera parameters in Blender; together with the image aspect ratio, it defines a perspective projection. As the focal length increases, objects appear larger in the rendered image.
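The relation between focal length and apparent size can be sketched with the standard pinhole-camera formula (this snippet is illustrative and not part of the deformation script; the 32 mm sensor width is an assumption matching Blender's traditional default, and the focal lengths are the ones used in the example renders below):

```python
# Illustrative sketch: a longer focal length gives a narrower field of
# view, so the same object covers more of the frame and appears larger.
from math import atan

def fov(focal_length_mm, sensor_width_mm=32.0):
    """Horizontal field of view (radians) of a pinhole camera."""
    return 2.0 * atan(sensor_width_mm / (2.0 * focal_length_mm))

print(fov(50.0))  # ~0.62 rad
print(fov(80.0))  # ~0.39 rad (narrower view -> objects render larger)
```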
The script deforms mesh objects in a 3D scene by applying a non-uniform perspective projection whose focal length is defined as a function of the distance from the camera to each mesh vertex. Since this distance varies vertex by vertex, mesh objects are deformed as if, for instance, the far side of the 3D scene were seen by a camera with a focal length of 50mm while the near side were seen by another with a focal length of 80mm. A smooth, non-linear interpolation is used for the mapping from distance to focal length, as illustrated in the following plot, where the horizontal axis is the distance from the camera and the vertical axis is the focal length. The input and output intervals of the mapping (i.e., the min/max distance and the min/max focal length) are user-defined parameters of the deformer.
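The mapping can be sketched in plain Python as follows (no Blender required); the sigmoid and the parameter values mirror those used in the script further below. Note that the sigmoid never reaches exactly 0 or 1, so the factor at the interval endpoints only approximates fac_near and fac_far:

```python
# Standalone sketch of the distance-to-focal-length-factor mapping:
# a sigmoid remaps the normalized distance smoothly, so the scale
# factor eases from fac_near to fac_far instead of changing linearly.
from math import exp

d_near, d_far = 10.0, 17.0    # input interval (distance from camera)
fac_near, fac_far = 1.6, 1.0  # output interval (focal length factor)

def sigmoid(u):
    f = 6.0
    t = u * 2.0 * f - f       # map [0, 1] to [-6, 6]
    return 1.0 / (1.0 + exp(-t))

def factor(d):
    dn = (d - d_near) / (d_far - d_near)
    return fac_near + sigmoid(dn) * (fac_far - fac_near)

print(factor(10.0))  # ~1.6 (near side: stronger magnification)
print(factor(17.0))  # ~1.0 (far side: essentially unmodified)
```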
The following set of images demonstrates the visual effect of non-uniform perspective projection. The images on the left and right are renders of the same 3D model using cameras with focal lengths of 50mm and 80mm, respectively. The image in the middle is a render made after the 3D model was deformed by the script. It can be clearly seen that the far side of the model looks as if it were rendered with the 50mm focal length, while the near side appears similar to the 80mm render.
Here is the deformation script used for rendering the middle example image. Known limitations: (a) the deformation is only applied to mesh objects; (b) all meshes are modified in place without making copies, so the script is not suitable for animation rendering; and (c) mirror modifiers have to be removed by permanently applying them to the meshes. Note also that the script works only with perspective cameras. A future direction is to implement this deformer as a mesh modifier in Blender.
```python
# camera_distortion.py
# Tamito KAJIYAMA <2 September 2011>
#
# For each mesh object, apply the following matrices to each of the
# mesh vertices in this order:
# 1. the 'matrix_world' model-view matrix of the mesh object (the
#    vertices in the local coordinate system [CS] are projected to the
#    world CS)
# 2. the inverse model-view matrix of the camera (the vertices are
#    projected to the camera CS)
# 3. a non-uniform camera projection matrix with variable focal length
#    (the mesh is distorted)
# 4. the model-view matrix of the camera (the vertices are projected
#    back to the world CS)
# 5. the inverse 'matrix_world' model-view matrix of the mesh object
#    (the vertices are projected back to the local CS)
#
# NOTE: this script targets the Blender 2.5 Python API; in Blender
# 2.80 and later, matrix-vector multiplication uses the '@' operator
# instead of '*'.
import bpy
from math import *
from mathutils import *

debug = False

# user-defined parameters
scene_name = 'Scene'
d_near = 10
d_far = 17
fac_near = 1.6
fac_far = 1.0

sce = bpy.data.scenes[scene_name]
cam = sce.camera

# the model-view matrix of the camera
cam_mv = cam.matrix_world

# the inverse model-view matrix of the camera
cam_mv_inv = cam_mv.copy()
cam_mv_inv.invert()

fov = cam.data.angle
focus = tan(fov / 2.0)
near = cam.data.clip_start
aspect = float(sce.render.resolution_x) / float(sce.render.resolution_y)
if debug:
    print('fov =', fov, '[rad]')
    print('focus =', focus)
    print('near =', near)
    print('aspect =', aspect)

# the camera projection matrix (and its inverse) with the original
# focal length
cd_mat = Matrix([
    Vector([focus, 0, 0, 0]),
    Vector([0, aspect * focus, 0, 0]),
    Vector([0, 0, 65535.0 / 65536.0, 1]),
    Vector([0, 0, -near, 0])])
cd_mat_inv = cd_mat.copy()
cd_mat_inv.invert()

def sigmoid(u):
    f = 6.0
    t = u * 2.0 * f - f
    return 1.0 / (1.0 + exp(-t))

for ob in sce.objects:
    # check if the object is a mesh
    if ob.type != 'MESH':
        continue
    # the model-view matrix of the mesh object
    obj_mv = ob.matrix_world
    # the inverse model-view matrix of the mesh object
    obj_mv_inv = obj_mv.copy()
    obj_mv_inv.invert()
    # apply the camera distortion to mesh vertices
    for i in range(len(ob.data.vertices)):
        p0 = ob.data.vertices[i].co
        p1 = obj_mv * p0
        p2 = cam_mv_inv * p1
        # scale 'focus' by a function of distance from the camera
        d = p2.length
        dn = (d - d_near) / (d_far - d_near)
        fac = fac_near + sigmoid(dn) * (fac_far - fac_near)
        if debug:
            print('i =', i, 'd =', d, 'fac =', fac)
        cd_mat[0][0] = fac * focus
        cd_mat[1][1] = fac * focus * aspect
        p3 = (cd_mat_inv * cd_mat) * p2
        p4 = cam_mv * p3
        p5 = obj_mv_inv * p4
        ob.data.vertices[i].co = p5
```
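It is worth noting what the pair of projection matrices actually does. The following standalone check (no Blender required; the helper names matmul, invert, and projection are illustrative, not from the script, and the sample values for focus, aspect, near, and fac are arbitrary) shows that cd_mat_inv times the fac-modified cd_mat reduces to a simple scaling of camera-space x and y by fac:

```python
# Pure-Python sanity check of the net effect of (cd_mat_inv * cd_mat)
# in the script, using nested-list 4x4 matrices.

def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert(m):
    """Invert a 4x4 matrix by Gauss-Jordan elimination with pivoting."""
    n = 4
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

focus, aspect, near, fac = 0.5, 1.5, 0.1, 1.3

def projection(f):
    """Mirror of the script's cd_mat with focal parameter f."""
    return [[f, 0, 0,                 0],
            [0, aspect * f, 0,        0],
            [0, 0, 65535.0 / 65536.0, 1],
            [0, 0, -near,             0]]

# projection(fac * focus) is exactly the fac-modified cd_mat, since
# only the [0][0] and [1][1] entries depend on the focal parameter.
net = matmul(invert(projection(focus)), projection(fac * focus))
# net is (numerically) diag(fac, fac, 1, 1): the distortion step simply
# scales camera-space x and y by fac, independent of near and aspect.
```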