
GAMES101 Assignment 8: Mass-Spring System Pitfall Guide

Posted on 2024-05-28

Installing dependencies

Install them with:

sudo apt install libglu1-mesa-dev freeglut3-dev mesa-common-dev xorg-dev

Do not copy the command straight from the PDF: the hyphen characters in the PDF are wrong and lead to "unable to locate" errors.

Segmentation fault

From "Configuring the GAMES101 development environment on Win10 (WSL2)" - 知乎 (zhihu.com):

1. Run

export LIBGL_ALWAYS_INDIRECT=0

2. Download MobaXterm and use it as the terminal to launch ropesim.

However, my MobaXterm only shows a single WSL entry, and the display failure described above persisted.

What follows was pieced together from StackOverflow and elsewhere:

GLFW Error: Linux: Failed to watch for joystick…

Run

touch ~/.Xauthority

then restart MobaXterm and run

sudo cp ~/.Xauthority /root/

then launch ropesim with sudo:

sudo ./ropesim

A GLFW Error is still printed, but the window now opens successfully.

[screenshot: ropesim window]

The rope flies apart

Reference:

"Some answers to questions about Assignment 8" – 计算机图形学与混合现实在线平台 (games-cn.org)

For explicit Euler this is expected behavior; reducing the step size (e.g. sudo ./ropesim -s 1024) postpones the blow-up, but the method still diverges.

For the Verlet method, m->forces must be zeroed after each mass is processed. The simulateEuler function above already includes this; here you need to add it yourself.

Full code

#include <iostream>
#include <vector>

#include "CGL/vector2D.h"

#include "mass.h"
#include "rope.h"
#include "spring.h"

namespace CGL {

Rope::Rope(Vector2D start, Vector2D end, int num_nodes, float node_mass, float k, vector<int> pinned_nodes)
{
// TODO (Part 1): Create a rope starting at `start`, ending at `end`, and containing `num_nodes` nodes.


// Def of rope:
// vector<Mass *> masses;
// vector<Spring *> springs;

/*
Mass(Vector2D position, float mass, bool pinned)
: start_position(position), position(position), last_position(position),
mass(mass), pinned(pinned) {}
Spring(Mass *a, Mass *b, float k)
: m1(a), m2(b), k(k), rest_length((a->position - b->position).norm()) {}
*/
for (int i = 0; i < num_nodes; i++) {
Vector2D pos = start + (end - start) * (1.0 * i / (num_nodes - 1));
masses.emplace_back(new Mass(pos, node_mass, false));
}

for (int i = 0; i < num_nodes - 1; i++) {
springs.emplace_back(new Spring(masses[i], masses[i + 1], k));
}

// Comment-in this part when you implement the constructor
for (auto &i : pinned_nodes) {
masses[i]->pinned = true;
}
}

void Rope::simulateEuler(float delta_t, Vector2D gravity)
{
for (auto &s : springs)
{
// TODO (Part 2): Use Hooke's law to calculate the force on a node
auto len = (s->m1->position - s->m2->position).norm();
s->m1->forces += - s->k * (s->m1->position - s->m2->position) / len * (len - s->rest_length);
s->m2->forces += - s->k * (s->m2->position - s->m1->position) / len * (len - s->rest_length);
}

for (auto &m : masses)
{
if (!m->pinned)
{
// TODO (Part 2): Add the force due to gravity, then compute the new velocity and position
auto a = m->forces / m->mass + gravity;
// TODO (Part 2): Add global damping
float kd = 0.005;
a += -kd * m->velocity / m->mass;
auto v_t = m->velocity;
m->velocity += a * delta_t;
//m->position += v_t * delta_t; // explicit Euler: diverges
m->position += m->velocity * delta_t; // Semi-implicit method
}

// Reset all forces on each mass
m->forces = Vector2D(0, 0);
}
}

void Rope::simulateVerlet(float delta_t, Vector2D gravity)
{
for (auto &s : springs)
{
// TODO (Part 3): Simulate one timestep of the rope using explicit Verlet (solving constraints)
auto len = (s->m1->position - s->m2->position).norm();
s->m1->forces += -s->k * (s->m1->position - s->m2->position) / len * (len - s->rest_length);
s->m2->forces += -s->k * (s->m2->position - s->m1->position) / len * (len - s->rest_length);
}

for (auto &m : masses)
{
if (!m->pinned)
{
Vector2D temp_position = m->position;
// TODO (Part 3.1): Set the new position of the rope mass
// TODO (Part 4): Add global Verlet damping
auto a = m->forces / m->mass + gravity;
float damping_factor = 0.00000000005;
m->position = temp_position + (1 - damping_factor) * (temp_position - m->last_position) + a * delta_t * delta_t;
m->last_position = temp_position;
}
m->forces = Vector2D(0, 0);
}
}
}

[image: mass-spring simulation result]

TinyRendererNotes

Posted on 2024-05-06

TinyRenderer study notes

Writing a software rasterizer by hand, to deepen my understanding of the fundamentals of computer graphics and to practice object-oriented C++.

The project mainly follows Home · ssloy/tinyrenderer Wiki (github.com) and is built with CMake.

You can browse my commit history to find the code at each stage of progress.

Almost every graphics concept involved here appears in the GAMES101 course; I recommend taking GAMES101 first, or following it in parallel.

Lesson 0 Getting Started

Using TGA image format

Use this bare-bones framework to produce TGA images:
ssloy/tinyrenderer at 909fe20934ba5334144d2c748805690a1fa4c89f (github.com)

Just #include "tgaimage.h" and link tgaimage.cpp when compiling.
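If you also build with CMake, a minimal CMakeLists.txt along these lines is enough (a sketch with assumed file names; adjust the source list to your layout):

cmake_minimum_required(VERSION 3.10)
project(tinyrenderer)
set(CMAKE_CXX_STANDARD 11)
# main.cpp is your own code; tgaimage.cpp comes from the framework
add_executable(tinyrenderer main.cpp tgaimage.cpp)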

Example: set pixel (52,41) to red.

#include "tgaimage.h"
//Set color with RGB
const TGAColor white = TGAColor(255, 255, 255, 255);
const TGAColor red = TGAColor(255, 0, 0, 255);
int main(int argc, char** argv) {
//Set image size
TGAImage image(100, 100, TGAImage::RGB);
//Set pixel color
image.set(52, 41, red);
//To have the origin at the left bottom corner of the image
image.flip_vertically();
image.write_tga_file("output.tga");
return 0;
}

My recommended environment: CLion + CMake. (Because I really couldn't figure out VSCode's CMake debugging =.=)

Since models are loaded from disk, set the working directory to the project folder.

[screenshots: setting the working directory in CLion]

My CLion had a bug where TGA images would not load. In Settings -> Editor -> File Types, remove .tga and open the files with a native viewer instead.

Lesson 1 Bresenham’s Line Drawing Algorithm

Draw line segments with Bresenham's algorithm.

Theory: https://en.wikipedia.org/wiki/Bresenham's_line_algorithm

Implementation reference: https://rosettacode.org/wiki/Bitmap/Bresenham%27s_line_algorithm#C++

To check correctness, draw lines with slope below -1, between -1 and 0, between 0 and 1, and above 1, plus horizontal and vertical lines.

#include <cmath>

#include "tgaimage.h"

const TGAColor white = TGAColor(255, 255, 255, 255);
const TGAColor red = TGAColor(255, 0, 0, 255);
const TGAColor blue = TGAColor(0, 0, 255, 255);

void line(int x1, int y1, int x2, int y2, TGAImage& image, TGAColor color)
{
//Ensure that slope in (0, 1)
const bool steep = (std::abs(y2 - y1) > std::abs(x2 - x1));
if (steep) {
std::swap(x1, y1);
std::swap(x2, y2);
}
if (x1 > x2) {
std::swap(x1, x2);
std::swap(y1, y2);
}

const float dx = x2 - x1;
const float dy = fabs(y2 - y1);

float error = dx / 2.0f;
const int ystep = (y1 < y2) ? 1 : -1;
int y = (int)y1;

const int maxX = (int)x2;

for (int x = (int)x1; x <= maxX; x++) {
if (steep) {
image.set(y, x, color);
} else {
image.set(x, y, color);
}

error -= dy;
if (error < 0) {
y += ystep;
error += dx;
}
}
}

int main(int argc, char** argv)
{
TGAImage image(100, 100, TGAImage::RGB);
line(13, 20, 80, 40, image, red);
line(55, 33, 22, 66, image, blue);
line(33, 33, 66, 66, image, white);
line(44, 20, 44, 80, image, white);
line(20, 44, 80, 44, image, white);
image.flip_vertically();
image.write_tga_file("output.tga");
return 0;
}

Result:

[image: Bresenham line test output]

Lesson 2: Triangle rasterization and back face culling

3D models are usually built from triangles. To conveniently represent points, vectors, and polygons, we write geometry.h:

#pragma once

#include <cmath>
#include <iostream>

template <class t> struct Vec2 {
union {
struct {t u, v;};
struct {t x, y;};
t raw[2];
};
Vec2() : u(0), v(0) {}
Vec2(t _u, t _v) : u(_u),v(_v) {}
template <class s> Vec2(const Vec2<s> &V) : u(V.u), v(V.v) {} // converting ctor (e.g. Vec2i -> Vec2f), needed once depth is added
inline Vec2<t> operator +(const Vec2<t> &V) const { return Vec2<t>(u+V.u, v+V.v); }
inline Vec2<t> operator -(const Vec2<t> &V) const { return Vec2<t>(u-V.u, v-V.v); }
inline Vec2<t> operator *(float f) const { return Vec2<t>(u*f, v*f); }
t& operator [](const int i) { return raw[i]; } // indexing into raw, used by baryCentric below
template <class > friend std::ostream& operator<<(std::ostream& s, Vec2<t>& v);
};

template <class t> struct Vec3 {
union {
struct {t x, y, z;};
struct { t ivert, iuv, inorm; };
t raw[3];
};
Vec3() : x(0), y(0), z(0) {}
Vec3(t _x, t _y, t _z) : x(_x),y(_y),z(_z) {}
inline Vec3<t> operator ^(const Vec3<t> &v) const { return Vec3<t>(y*v.z-z*v.y, z*v.x-x*v.z, x*v.y-y*v.x); }
inline Vec3<t> operator +(const Vec3<t> &v) const { return Vec3<t>(x+v.x, y+v.y, z+v.z); }
inline Vec3<t> operator -(const Vec3<t> &v) const { return Vec3<t>(x-v.x, y-v.y, z-v.z); }
inline Vec3<t> operator *(float f) const { return Vec3<t>(x*f, y*f, z*f); }
inline t operator *(const Vec3<t> &v) const { return x*v.x + y*v.y + z*v.z; }
t& operator [](const int i) { return raw[i]; } // indexing into raw, used by lookat later
Vec2<t> toVec2() const { return Vec2<t>(x, y); } // drop z; used by the z-buffer code below
float norm () const { return std::sqrt(x*x+y*y+z*z); }
Vec3<t> & normalize(t l=1) { *this = (*this)*(l/norm()); return *this; }
template <class > friend std::ostream& operator<<(std::ostream& s, Vec3<t>& v);
};

typedef Vec2<float> Vec2f;
typedef Vec2<int> Vec2i;
typedef Vec3<float> Vec3f;
typedef Vec3<int> Vec3i;

template <class t> std::ostream& operator<<(std::ostream& s, Vec2<t>& v) {
s << "(" << v.x << ", " << v.y << ")\n";
return s;
}

template <class t> std::ostream& operator<<(std::ostream& s, Vec3<t>& v) {
s << "(" << v.x << ", " << v.y << ", " << v.z << ")\n";
return s;
}

How do we draw a filled triangle? There are two common algorithms: scanline and edge function.

On multi-core CPUs the edge-function method is more efficient: find the triangle's rectangular bounding box, then test every pixel inside it for containment:

triangle(vec2 points[3]) { 
vec2 bbox[2] = find_bounding_box(points);
for (each pixel in the bounding box) {
if (inside(points, pixel)) {
put_pixel(pixel);
}
}
}

So the problem becomes: given a triangle's three vertices, how do we decide whether a point lies inside it?

One of the best approaches is to compute the point's barycentric coordinates (also called area coordinates) with respect to the triangle.

Wikipedia: https://zh.wikipedia.org/wiki/%E9%87%8D%E5%BF%83%E5%9D%90%E6%A0%87

In short, they are the area ratios of the three sub-triangles the point forms with the edges. If the point is outside the triangle, one of the coordinates is negative.
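As a quick refresher (standard math, matching the baryCentric code below): for triangle $ABC$ and point $P$, the barycentric coordinates $(\alpha, \beta, \gamma)$ satisfy

$$P = \alpha A + \beta B + \gamma C, \qquad \alpha + \beta + \gamma = 1,$$

where $\alpha = S_{PBC}/S_{ABC}$, $\beta = S_{PCA}/S_{ABC}$, $\gamma = S_{PAB}/S_{ABC}$ are signed area ratios; $P$ lies inside the triangle exactly when all three are nonnegative.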

[figure: barycentric coordinates]

Since the original author's code here is a bit ugly, I added Polygon2D and Triangle2D classes directly to geometry.h for barycentric computation and point-in-triangle testing:

template <class T>
class Polygon2D {
public:
int n;
std::vector<Vec2<T>> pt;
Polygon2D(int _n, std::vector<Vec2<T>> _pt): n(_n), pt(_pt) {}
};

template <class T>
class Triangle2D: public Polygon2D<T> {
public:
using Polygon2D<T>::pt;
Triangle2D(std::vector<Vec2<T>> _pt): Polygon2D<T>(3, _pt) {}
Vec3f baryCentric(Vec2i P) {
Vec3f u = Vec3f(pt[2][0]-pt[0][0], pt[1][0]-pt[0][0], pt[0][0]-P[0])^Vec3f(pt[2][1]-pt[0][1], pt[1][1]-pt[0][1], pt[0][1]-P[1]);
/* `pts` and `P` has integer value as coordinates
so `abs(u[2])` < 1 means `u[2]` is 0, that means
triangle is degenerate, in this case return something with negative coordinates */
if (std::abs(u.z)<1) return Vec3f(-1,1,1);
return Vec3f(1.f-(u.x+u.y)/u.z, u.y/u.z, u.x/u.z);
}

bool isInside(Vec2i P) {
auto bc = baryCentric(P);
if (bc.x<0 || bc.y<0 || bc.z<0) return false;
return true;
}
};

Drawing filled triangles in main.cpp:

//Iterate all points in the rectangular bounding box of triangle, draw if the point is inside
void drawSolidTriangle(Triangle2D<int> tri, TGAImage &image, TGAColor color) {
Vec2i bboxmin(image.get_width()-1, image.get_height()-1);
Vec2i bboxmax(0, 0);
Vec2i clamp(image.get_width()-1, image.get_height()-1);
for (int i=0; i<3; i++) {
bboxmin.x = std::max(0, std::min(bboxmin.x, tri.pt[i].x));
bboxmin.y = std::max(0, std::min(bboxmin.y, tri.pt[i].y));

bboxmax.x = std::min(clamp.x, std::max(bboxmax.x, tri.pt[i].x));
bboxmax.y = std::min(clamp.y, std::max(bboxmax.y, tri.pt[i].y));
}
Vec2i P;
for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
Vec3f bc_screen = tri.baryCentric(P);
if (bc_screen.x<0 || bc_screen.y<0 || bc_screen.z<0) continue;
image.set(P.x, P.y, color);
}
}
};

Result:

[image: filled triangle output]

Once triangles work, we can import the author's face model, which is built from triangles.

The .obj model format looks like this:

# List of geometric vertices, with (x, y, z, [w]) coordinates, w is optional and defaults to 1.0.
v 0.123 0.234 0.345 1.0
v ...
...
# List of texture coordinates, in (u, [v, w]) coordinates, these will vary between 0 and 1. v, w are optional and default to 0.
vt 0.500 1 [0]
vt ...
...
# List of vertex normals in (x,y,z) form; normals might not be unit vectors.
vn 0.707 0.000 0.707
vn ...
...
# Parameter space vertices in (u, [v, w]) form; free form geometry statement (see below)
vp 0.310000 3.210000 2.100000
vp ...
...
# Polygonal face element (see below)
f 1 2 3
f 3/1 4/2 5/3
f 6/4/1 3/5/3 7/6/5
f 7//1 8//2 9//3
f ...
...
# Line element (see below)
l 5 8 1 2 4 9

For now we ignore the model's depth (the z coordinate) and simply project it orthographically onto the XY plane, so a vertex's screen coordinates are:

screen_coords[j] = Vec2i((v.x+1.)*width/2., (v.y+1.)*height/2.);

Assume light shines from directly in front toward the back, i.e. direction (0,0,-1).

Here we use a simplified intensity model: ignore the distance between a face and the light, and treat faces looking straight at the light (normal aligned with the light direction) as brightest. Each triangle's intensity is then the dot product of its unit normal with the light direction.

int main(int argc, char** argv) {
if (2==argc) {
model = new Model(argv[1]);
} else {
model = new Model("obj/african_head.obj");
}

TGAImage image(width, height, TGAImage::RGB);
Vec3f light_dir(0,0,-1);
int cnt = 0;
for (int i=0; i<model->nfaces(); i++) {
std::vector<int> face = model->face(i);
Vec2i screen_coords[3];
Vec3f world_coords[3];
for (int j=0; j<3; j++) {
Vec3f v = model->vert(face[j]);
screen_coords[j] = Vec2i((v.x+1.)*width/2., (v.y+1.)*height/2.);
world_coords[j] = v;
}
Vec3f n = (world_coords[2]-world_coords[0])^(world_coords[1]-world_coords[0]);
n.normalize();
float intensity = n*light_dir;
if (intensity>0) {
printf("ok %d\n", ++cnt);
drawSolidTriangle(Triangle2D<int>({screen_coords[0], screen_coords[1], screen_coords[2]}), image, TGAColor(intensity*255, intensity*255, intensity*255, 255));
}
}

image.flip_vertically();
image.write_tga_file("output.tga");
delete model;
return 0;
}

With this simplification, the render looks like this:

[image: flat-shaded face render]

Notice that triangles inside the mouth cavity cover the lips. In the next lesson we introduce depth testing to resolve occlusion correctly.

Lesson 3: Z Buffer

The idea of depth testing: allocate a Z-buffer with one entry per pixel, initialized to negative infinity.

When rasterizing, compare the depth of the current triangle's point against the Z-buffer value; if it is greater (closer to the camera), write the pixel and update the Z-buffer.

To do this we add a depth component to the screen coordinates (for the face model above, the model's z works fine) and add the depth-buffer test in drawSolidTriangle():

//Iterate all points in the rectangular bounding box of triangle, draw if the point is inside
// 2024 04 26 2d->3d
void drawSolidTriangle(Triangle2D<float> tri, TGAImage &image, TGAColor color, float *zbuffer) {
Vec2f bboxmin(image.get_width()-1, image.get_height()-1);
Vec2f bboxmax(0, 0);
Vec2f clamp(image.get_width()-1, image.get_height()-1);
for (int i=0; i<3; i++) {
bboxmin.x = std::max((float)0, std::min(bboxmin.x, tri.pt[i].x));
bboxmin.y = std::max((float)0, std::min(bboxmin.y, tri.pt[i].y));

bboxmax.x = std::min(clamp.x, std::max(bboxmax.x, tri.pt[i].x));
bboxmax.y = std::min(clamp.y, std::max(bboxmax.y, tri.pt[i].y));
}
Vec3i P;
for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
Vec3f bc = tri.baryCentric(P.toVec2());
if (bc.x<0 || bc.y<0 || bc.z<0) continue;

// interpolate depth from the three vertices via barycentric coordinates
P.z = tri.depth[0] * bc.x + tri.depth[1] * bc.y + tri.depth[2] * bc.z;

int idx = P.x+P.y*width;
if (zbuffer[idx]<P.z) {
zbuffer[idx] = P.z;
image.set(P.x, P.y, color);
}
}
}
};




Vec3f worldToScreen(Vec3f v) {
return Vec3f(int((v.x+1.)*width/2.+.5), int((v.y+1.)*height/2.+.5), v.z);
}

int main(int argc, char** argv) {
if (2==argc) {
model = new Model(argv[1]);
} else {
model = new Model("obj/african_head.obj");
}

float *zbuffer = new float[width * height];
for (int i=width*height; i--; zbuffer[i] = -std::numeric_limits<float>::max());


TGAImage image(width, height, TGAImage::RGB);
Vec3f light_dir(0,0,-1);
int cnt = 0;
for (int i=0; i<model->nfaces(); i++) {
std::vector<int> face = model->face(i);
Vec3f screen_coords[3];
Vec3f world_coords[3];
for (int j=0; j<3; j++) {
Vec3f v = model->vert(face[j]);
//screen_coords[j] = Vec2i((v.x+1.)*width/2., (v.y+1.)*height/2.);
world_coords[j] = v;
screen_coords[j] = worldToScreen(v);
}

Vec3f n = (world_coords[2]-world_coords[0])^(world_coords[1]-world_coords[0]);
n.normalize();
float intensity = n*light_dir;

if (intensity>0) {
printf("ok %d\n", ++cnt);
drawSolidTriangle(Triangle2D<float>({screen_coords[0], screen_coords[1], screen_coords[2]}), image, TGAColor(intensity*255, intensity*255, intensity*255, 255), zbuffer);
}
}

image.flip_vertically();
image.write_tga_file("output.tga");
delete model;
return 0;
}

Correspondingly, add a depth array to the Triangle2D class:

template <class T>
class Triangle2D: public Polygon2D<T> {
public:
using Polygon2D<T>::pt;
std::vector<T> depth;

Triangle2D(std::vector<Vec2<T>> _pt, std::vector<T> _depth = {0, 0, 0}):
Polygon2D<T>(3, _pt),
depth(_depth) {}

Triangle2D(std::vector<Vec3<float>> _pt):
Polygon2D<T>(3, {_pt[0].toVec2(), _pt[1].toVec2(), _pt[2].toVec2()}),
depth({_pt[0].z, _pt[1].z, _pt[2].z}) {}

Vec3f baryCentric(Vec2f P) {
Vec3f u = Vec3f(pt[2][0]-pt[0][0], pt[1][0]-pt[0][0], pt[0][0]-P[0])^Vec3f(pt[2][1]-pt[0][1], pt[1][1]-pt[0][1], pt[0][1]-P[1]);
/* `pts` and `P` has integer value as coordinates
so `abs(u[2])` < 1 means `u[2]` is 0, that means
triangle is degenerate, in this case return something with negative coordinates */
if (std::abs(u.z)<1) return Vec3f(-1,1,1);
return Vec3f(1.f-(u.x+u.y)/u.z, u.y/u.z, u.x/u.z);
}

bool isInside(Vec2i P) {
auto bc = baryCentric(P);
if (bc.x<0 || bc.y<0 || bc.z<0) return false;
return true;
}
};



Result:

[image: z-buffered render]

Bonus: Texture Mapping

In the .obj file, lines starting with "vt u v" provide an array of texture coordinates.

The number in the middle (between the slashes) in the facet lines “f x/x/x x/x/x x/x/x” are the texture coordinates of this vertex of this triangle. Interpolate it inside the triangle, multiply by the width-height of the texture image and you will get the color to put in your render.

The tinyrenderer author provides a diffuse texture: african_head_diffuse.tga

With it we can texture the face model. drawSolidTriangle no longer needs a color argument; instead it takes the intensity plus the uv texture coordinates of the triangle's three vertices.

//Iterate all points in the rectangular bounding box of triangle, draw if the point is inside
// 2024 04 26 2d->3d, texture mapping
void drawSolidTriangle(Triangle2D<float> tri, Vec2i* uv, TGAImage &image, float intensity, float *zbuffer) {
Vec2f bboxmin(image.get_width()-1, image.get_height()-1);
Vec2f bboxmax(0, 0);
Vec2f clamp(image.get_width()-1, image.get_height()-1);
for (int i=0; i<3; i++) {
bboxmin.x = std::max((float)0, std::min(bboxmin.x, tri.pt[i].x));
bboxmin.y = std::max((float)0, std::min(bboxmin.y, tri.pt[i].y));

bboxmax.x = std::min(clamp.x, std::max(bboxmax.x, tri.pt[i].x));
bboxmax.y = std::min(clamp.y, std::max(bboxmax.y, tri.pt[i].y));
}
Vec3i P;
for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
Vec3f bc = tri.baryCentric(P.toVec2());//toTriangle2D().baryCentric(P);
if (bc.x<0 || bc.y<0 || bc.z<0) continue;

P.z = tri.depth[0] * bc.x + tri.depth[1] * bc.y + tri.depth[2] * bc.z;

int idx = P.x+P.y*width;
if (zbuffer[idx]<P.z) {
zbuffer[idx] = P.z;

Vec2i P_uv = uv[0] * bc.x + uv[1] * bc.y + uv[2] * bc.z;
TGAColor color = model->diffuse(P_uv);
image.set(P.x, P.y, color);
}
}
}
};




Vec3f worldToScreen(Vec3f v) {
return Vec3f(int((v.x+1.)*width/2.+.5), int((v.y+1.)*height/2.+.5), v.z);
}

int main(int argc, char** argv) {
if (2==argc) {
model = new Model(argv[1]);
} else {
model = new Model("obj/african_head.obj");
}

float *zbuffer = new float[width * height];
for (int i=width*height; i--; zbuffer[i] = -std::numeric_limits<float>::max());


TGAImage image(width, height, TGAImage::RGB);
Vec3f light_dir(0,0,-1);
int cnt = 0;
for (int i=0; i<model->nfaces(); i++) {
std::vector<int> face = model->face(i);
Vec3f screen_coords[3];
Vec3f world_coords[3];
for (int j=0; j<3; j++) {
Vec3f v = model->vert(face[j]);
//screen_coords[j] = Vec2i((v.x+1.)*width/2., (v.y+1.)*height/2.);
world_coords[j] = v;
screen_coords[j] = worldToScreen(v);
}

Vec3f n = (world_coords[2]-world_coords[0])^(world_coords[1]-world_coords[0]);
n.normalize();
float intensity = n*light_dir;

if (intensity>0) {
printf("ok %d\n", ++cnt);
Vec2i uv[3];
for (int j = 0; j < 3; j++) uv[j] = model->uv(i, j);
drawSolidTriangle(Triangle2D<float>({screen_coords[0], screen_coords[1], screen_coords[2]}), uv, image, intensity, zbuffer);
}
}

image.flip_vertically();
image.write_tga_file("output.tga");
delete model;
return 0;
}

model.h and model.cpp need changes to support textures; the author published that code at the end of Lesson 4.

Result:

[image: textured render]

This is a parallel projection, which loses some realism: the hair beside the ear does not overlap the face in the xy-plane, so it gets drawn, yet it should be hidden by the skin in front of it. The eye/camera is effectively a point, not a source of parallel rays; the light from an object converges to a point, which is what we call perspective. Next, perspective projection:

Lesson 4: Perspective projection

Homogeneous coordinates

[figure: homogeneous coordinates]

Basic transformations (figures from GAMES101)

[figure: basic transformations]

Inverse transformations

[figure: inverse transformations]

Composite transformations

[figure: composite transformations]
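In short (the standard convention, as used in GAMES101): a point is written $(x, y, z, 1)^T$ and a vector $(x, y, z, 0)^T$, which turns translation into a linear map, e.g.

$$\begin{pmatrix}1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z\\0&0&0&1\end{pmatrix}\begin{pmatrix}x\\y\\z\\1\end{pmatrix}=\begin{pmatrix}x+t_x\\y+t_y\\z+t_z\\1\end{pmatrix},$$

and $(x, y, z, w)$ with $w \neq 0$ denotes the point $(x/w, y/w, z/w)$, which is exactly the divide that perspective projection exploits below.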

Implementing a matrix class:

#include <vector>
#include <cassert>
#include <iostream>

const int DEFAULT_D = 4;
class Matrix {
std::vector<std::vector<float>> m;
int nrow, ncol;
public:
Matrix(int r=DEFAULT_D, int c=DEFAULT_D) :
m(std::vector<std::vector<float>> (r, std::vector<float>(c, 0.f))),
nrow(r), ncol(c) {}

int get_nrow() { return nrow; }
int get_ncol() { return ncol; }

static Matrix identity(int dimensions) {
Matrix E(dimensions, dimensions);
for (int i = 0; i < dimensions; i++)
E[i][i] = 1;
return E;
}

std::vector<float>& operator[](const int i) {
assert(i >= 0 && i < nrow);
return m[i];
}

const std::vector<float>& operator[](const int i) const {
assert(i >= 0 && i < nrow);
return m[i];
}

Matrix operator*(const Matrix& a) {
assert(this->ncol == a.nrow);
Matrix res(this->nrow, a.ncol);
for (int i = 0; i < this->nrow; i++) {
for (int j = 0; j < a.ncol; j++) {
res.m[i][j] = 0;
for (int k = 0; k < this->ncol; k++)
res.m[i][j] += this->m[i][k]*a.m[k][j];
}
}
return res;
}

Matrix transpose() {
Matrix res(ncol, nrow);
for (int i = 0; i < ncol; i++)
for (int j = 0; j < nrow; j++)
res.m[i][j] = m[j][i];
return res;
}
Matrix inverse() {
assert(nrow==ncol);
// augmenting the square matrix with the identity matrix of the same dimensions a => [ai]
Matrix result(nrow, ncol*2);
for(int i=0; i<nrow; i++)
for(int j=0; j<ncol; j++)
result[i][j] = m[i][j];
for(int i=0; i<nrow; i++)
result[i][i+ncol] = 1;
// first pass
for (int i=0; i<nrow-1; i++) {
// normalize the first row
for(int j=result.ncol-1; j>=0; j--)
result[i][j] /= result[i][i];
for (int k=i+1; k<nrow; k++) {
float coeff = result[k][i];
for (int j=0; j<result.ncol; j++) {
result[k][j] -= result[i][j]*coeff;
}
}
}
// normalize the last row
for(int j=result.ncol-1; j>=nrow-1; j--)
result[nrow-1][j] /= result[nrow-1][nrow-1];
// second pass
for (int i=nrow-1; i>0; i--) {
for (int k=i-1; k>=0; k--) {
float coeff = result[k][i];
for (int j=0; j<result.ncol; j++) {
result[k][j] -= result[i][j]*coeff;
}
}
}
// cut the identity matrix back
Matrix truncate(nrow, ncol);
for(int i=0; i<nrow; i++)
for(int j=0; j<ncol; j++)
truncate[i][j] = result[i][j+ncol];
return truncate;
}

friend std::ostream& operator<<(std::ostream& s, Matrix& m);
};

inline std::ostream& operator<<(std::ostream& s, Matrix& m) {
for (int i = 0; i < m.nrow; i++) {
for (int j = 0; j < m.ncol; j++) {
s << m[i][j];
if (j<m.ncol-1) s << "\t";
}
s << "\n";
}
return s;
}

Deriving a simple projection matrix:

Assume the camera sits at (0,0,c) and the image plane is z=0, as illustrated.

By similar triangles, $x'/c = x/(c-z)$, i.e.

$$x' = \frac{x}{1 - z/c},$$

and likewise

$$y' = \frac{y}{1 - z/c}.$$

To stretch segments close to the camera and compress those far from it along the z axis, the projection matrix takes the form

$$P = \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&r&1\end{pmatrix}.$$

Applying it and carrying out the homogeneous divide gives the projected point

$$P\begin{pmatrix}x\\y\\z\\1\end{pmatrix} = \begin{pmatrix}x\\y\\z\\rz+1\end{pmatrix} \;\Rightarrow\; \left(\frac{x}{rz+1},\ \frac{y}{rz+1},\ \frac{z}{rz+1}\right).$$

Comparing with the similar-triangles result above shows $r = -1/c$.

This gives the projection matrix for the simple case; the whole transformation chain is illustrated below.

[figure: the projection transformation chain]

In the program this is implemented as:

screen_coords[j] = hc2v(viewportMatrix * projectionMatrix * v2hc(v));

(ordinary coordinates → homogeneous coordinates)

world coordinates → (projection transform) projected coordinates → (viewport transform) screen coordinates

(homogeneous coordinates → ordinary coordinates)

These coordinates carry the position (x,y) plus a depth z, which is handed to the z-buffer.

The projection leaves us in the canonical cube [-1,1]^3; the viewport transform then maps that cube onto the screen rectangle (and depth range) for drawing.

Implementation of these transforms:

//Transition between coordinates (vector type) and homogeneous coordinates (matrix type)
Matrix v2hc(const Vec3f &v) {
Matrix hc(4, 1);
hc[0][0] = v.x;
hc[1][0] = v.y;
hc[2][0] = v.z;
hc[3][0] = 1;
return hc;
}
Vec3f hc2v(const Matrix &hc) {
return Vec3f(hc[0][0], hc[1][0], hc[2][0]) * (1.f / hc[3][0]);
}

Vec3f light_dir(0,0,-1);
Vec3f camera(0, 0, 3);
//project to z = 0
Matrix projection(const Vec3f &camera) {
Matrix m = Matrix::identity(4);
m[3][2] = -1.f/camera.z;
return m;
}

//viewport(width / 8, height / 8, width * 0.75, height * 0.75);
// leave a 1/8 margin at the window edges
Matrix viewport(int x, int y, int w, int h) {
Matrix m = Matrix::identity(4);
//Translation
m[0][3] = x + w / 2.f;
m[1][3] = y + h / 2.f;
m[2][3] = depth / 2.f;
//scale [-1,1] to the viewport size
m[0][0] = w / 2.f;
m[1][1] = h / 2.f;
m[2][2] = depth / 2.f;
return m;
}

int main() {
...

Matrix projectionMatrix = projection(camera);
Matrix viewportMatrix = viewport(width / 8, height / 8, width * 0.75, height * 0.75);

...

for (int i=0; i<model->nfaces(); i++) {
std::vector<int> face = model->face(i);
Vec3f screen_coords[3];
Vec3f world_coords[3];
for (int j=0; j<3; j++) {
Vec3f v = model->vert(face[j]);
//world -> screen:
//3d coordinate -> homogeneous coordinates
//-> projection trans(camera at (0,0,c), project to plane z = 0)
//-> viewport trans(to make central at (w/2,h/2,d/2))

world_coords[j] = v;
screen_coords[j] = hc2v(viewportMatrix * projectionMatrix * v2hc(v));
}

//Still simplified light intensity
Vec3f n = (world_coords[2]-world_coords[0])^(world_coords[1]-world_coords[0]);
n.normalize();
float intensity = n*light_dir;

if (intensity>0) {
printf("ok %d\n", ++cnt);
Vec2i uv[3];
for (int j = 0; j < 3; j++) uv[j] = model->uv(i, j);
drawSolidTriangle(Triangle2D<float>({screen_coords[0], screen_coords[1], screen_coords[2]}), uv, image, intensity, zbuffer);
}
}
...
}

Result:

[image: perspective projection render]

Note: TinyRenderer's perspective projection differs from GAMES101's. GAMES101 fixes M[3][2] = 1 and solves for the third row, whereas here the third row is fixed to (0 0 1 0) and M[3][2] is solved for.

There is no "near plane" here; effectively n = 0 and f = c.
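For reference, the two conventions side by side (the left matrix is the one derived above; the GAMES101 form is quoted from the course, with $n$ and $f$ the near and far planes):

$$M_{tiny}=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&-1/c&1\end{pmatrix},\qquad M_{persp\to ortho}=\begin{pmatrix}n&0&0&0\\0&n&0&0\\0&0&n+f&-nf\\0&0&1&0\end{pmatrix}.$$

Both put a linear function of $z$ into the $w$ component, so the perspective effect appears after the homogeneous divide.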

Below is the GAMES101-style result (third row 0 0 A B):

[images: GAMES101-style perspective renders]

Lesson 5: Moving the camera

Previously the camera was fixed at (0,0,c), looking along -z.

An arbitrary camera is specified by three vectors: the camera position e, the point it looks at c, and the up vector u, as illustrated:

[figure: camera vectors e, c, u]

We stipulate that the camera always looks along -z with u along +y, which defines a new frame x'y'z'. The question is how to convert object coordinates [x,y,z] into the new frame's [x',y',z'].

First recall what [x,y,z] means: the coefficients of three orthonormal basis vectors i, j, k,

$$\vec{OP} = x\,\vec i + y\,\vec j + z\,\vec k.$$

With the new basis vectors i', j', k' there must exist a matrix M such that

$$[\vec i\,',\ \vec j\,',\ \vec k\,'] = [\vec i,\ \vec j,\ \vec k]\, M.$$

Writing OP as OO' + O'P connects the point to the new basis:

$$\vec{OP} = \vec{OO'} + x'\,\vec i\,' + y'\,\vec j\,' + z'\,\vec k\,'.$$

Expressing [i',j',k'] with the formula above and factoring [i,j,k] out of both sides, then substituting the definition of [x,y,z] on the left, yields the relation between the two coordinate triples:

$$\begin{pmatrix}x\\y\\z\end{pmatrix} = \vec{OO'}_{ijk} + M\begin{pmatrix}x'\\y'\\z'\end{pmatrix},$$

where $\vec{OO'}_{ijk}$ denotes the coordinates of OO' in the original basis.

The look-at derivation above is a little tangled; I recommend https://www.zhihu.com/question/447781866.

Here is my own understanding:

In short, let M be the transform taking the origin and basis [i,j,k] to eyepos and [i',j',k']. Then M = TR: rotate first, then translate.

The rotation R is easy to read off, since multiplying a unit basis vector by it must yield the corresponding new basis vector; with r, u, v the coordinates of i', j', k' in the original frame (as columns):

$$R = \begin{pmatrix} r_x & u_x & v_x & 0\\ r_y & u_y & v_y & 0\\ r_z & u_z & v_z & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$$

and T translates the origin to the eye position C = eyepos:

$$T = \begin{pmatrix} 1 & 0 & 0 & C_x\\ 0 & 1 & 0 & C_y\\ 0 & 0 & 1 & C_z\\ 0 & 0 & 0 & 1 \end{pmatrix}$$

M is a transform of the coordinate axes: it expresses the new basis in the original frame. To express original coordinates in the new frame we clearly multiply by the inverse, $M^{-1} = R^{-1}T^{-1} = R^{T}T^{-1}$ (R is orthogonal). This inverse is the ModelView matrix.

With this, implement lookat to compute the ModelView matrix:

Vec3f light_dir = Vec3f(0, 0, -1).normalize();
Vec3f eye(1, 1, 3);
Vec3f center(0, 0, 0);
Vec3f up(0, 1, 0);
//Vec3f camera(0, 0, 3);

//screen_coordinate = viewport * projection * modelview * world_coordinate
Matrix lookat(Vec3f eye, Vec3f center, Vec3f up) {
Vec3f z = (eye - center).normalize();
Vec3f x = (up ^ z).normalize();
Vec3f y = (z ^ x).normalize();
Matrix M_inv = Matrix::identity(4);
Matrix T = Matrix::identity(4);
//thanks https://www.zhihu.com/question/447781866
for (int i = 0; i < 3; i++) {
M_inv[0][i] = x[i];
M_inv[1][i] = y[i];
M_inv[2][i] = z[i];
T[i][3] = -eye[i];
}
return M_inv * T;
}


Matrix projection(Vec3f eye, Vec3f center) {
Matrix m = Matrix::identity(4);
m[3][2] = -1.f / (eye - center).norm();
//m[3][2] = -1.f / camera.z;
return m;
}


int main() {
...

Matrix modelviewMatrix = lookat(eye, center, up);
Matrix projectionMatrix = projection(eye, center);
Matrix viewportMatrix = viewport(width / 8, height / 8, width * 0.75, height * 0.75);

...
screen_coords[j] = hc2v(viewportMatrix * projectionMatrix * modelviewMatrix * v2hc(v));
...

}

Result (currently still a bit buggy):

[image: render with the movable camera]

Bonus: Transformation of normal vectors

To handle lighting after transforming the model, any face normals the model provides must be transformed as well.

The key fact: if the model's coordinates are transformed by an affine matrix M, then its normals are transformed by the transpose of M's inverse.

Proof: consider a plane through the origin, Ax+By+Cz=0, whose normal is (A,B,C). In matrix form:

$$\begin{pmatrix}A & B & C\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix} = 0.$$

Insert $M^{-1}M$ between the two factors:

$$\begin{pmatrix}A & B & C\end{pmatrix} M^{-1} M \begin{pmatrix}x\\ y\\ z\end{pmatrix} = 0.$$

Since coordinates are column vectors, rewrite the left factor as a transpose:

$$\left(\left(M^{-1}\right)^{T}\begin{pmatrix}A\\ B\\ C\end{pmatrix}\right)^{T}\left(M\begin{pmatrix}x\\ y\\ z\end{pmatrix}\right) = 0.$$

Therefore, if the coordinates (x,y,z) are transformed by M, the normal must be transformed by $(M^{-1})^T$ for the plane equation to keep holding (equivalently $(M^{T})^{-1}$; transpose and inverse commute, proof omitted).

Lesson 6: Shaders

This lesson has two main parts: refactoring the code, and implementing various shaders.

After trying to adapt my earlier spaghetti code to the shader architecture, I gave up and used the author's geometry code directly. Most of it is self-explanatory; two templates are worth noting:

template<size_t LEN,size_t DIM,typename T> vec<LEN,T> embed(const vec<DIM,T> &v, T fill=1) {
vec<LEN,T> ret;
for (size_t i=LEN; i--; ret[i]=(i<DIM?v[i]:fill));
return ret;
}

template<size_t LEN,size_t DIM, typename T> vec<LEN,T> proj(const vec<DIM,T> &v) {
vec<LEN,T> ret;
for (size_t i=LEN; i--; ret[i]=v[i]);
return ret;
}

These two templates respectively extend a low-dimensional vector to a higher dimension (padding the missing entries, with 1 by default) and project a high-dimensional vector down to a lower one (keeping the first LEN coordinates). Both come up repeatedly when converting between homogeneous and ordinary coordinates.
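For example (a minimal sketch of how they compose; Viewport, Projection, ModelView are the global matrices from the author's gl.h, and the vec operators are the ones defined in his geometry.h):

// ordinary 3D point -> homogeneous point, w = 1
Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert));
gl_Vertex = Viewport * Projection * ModelView * gl_Vertex;
// divide by w, then keep the first three coordinates
Vec3f screen = proj<3>(gl_Vertex / gl_Vertex[3]);
// a direction (not a position) is embedded with fill = 0 so translation is ignored
Vec4f dir = embed<4>(light_dir, 0.f);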

[figure: the graphics pipeline]

A shader consists of two parts, a vertex shader and a fragment shader. Factoring them out lets us achieve all kinds of rendering effects just by swapping the shader, with no other code changes.

gl.h defines the shader interface; a concrete shader inherits the base class and overrides its two virtual functions:

struct IShader {
virtual ~IShader() {}
virtual Vec4f vertex(int iface, int nthvert) = 0;
virtual bool fragment(Vec3f bar, TGAColor &color) = 0;
};

Here iface is the index of the face and nthvert the index of the vertex within it (0, 1, 2 for a triangle).

For example, a simple GouraudShader: vertex computes each vertex's intensity as the dot product of the vertex normal with the light direction, and fragment interpolates it with barycentric coordinates over the triangle's pixels.

struct GouraudShader : public IShader {
Vec3f varying_intensity;
// vertex shader
virtual Vec4f vertex(int iface, int nthvert) {
Vec4f glVertex = embed<4>(model->vert(iface, nthvert));
glVertex = Viewport * Projection * ModelView * glVertex;
//
varying_intensity[nthvert] = std::max(0.f, model->normal(iface, nthvert) * light_dir);
return glVertex;
}
// fragment shader, called from drawTriangle
// bar is the barycentric coordinate of the pixel
virtual bool fragment(Vec3f bar, TGAColor &color) {
float intensity = varying_intensity * bar;
color = TGAColor(255, 255, 255) * intensity;
return false; // return value: whether to discard this pixel
}
};

//in main for every vertex
GouraudShader shader;
for (int i=0; i<model->nfaces(); i++) {
std::vector<int> face = model->face(i);
Vec3f world_coords[3];
Vec4f screen_coords[3];
for (int j = 0; j < 3; j++) {
Vec3f v = model->vert(face[j]);
world_coords[j] = v;
screen_coords[j] = shader.vertex(i, j);
}
drawTriangle(screen_coords, shader, image, zbuffer);
}
// in drawTriangle, for every pixel
...
TGAColor color;
bool discard = shader.fragment(bc, color);
if (!discard) {
zbuffer.set(P.x, P.y, TGAColor(frag_depth));
image.set(P.x, P.y, color);
}
...

The result:

[image: Gouraud-shaded render]

We can easily tweak the shader for different effects, e.g. quantizing the color into six bands of orange:

virtual bool fragment(Vec3f bar, TGAColor &color) {
float intensity = varying_intensity*bar;
if (intensity>.85) intensity = 1;
else if (intensity>.60) intensity = .80;
else if (intensity>.45) intensity = .60;
else if (intensity>.30) intensity = .45;
else if (intensity>.15) intensity = .30;
else intensity = 0;
color = TGAColor(255, 155, 0)*intensity;
return false;
}

Result:

[image: six-band toon shading]

Texture shader

Next, diffuse texture mapping: just extend the shader with a texture lookup.

struct TextureShader : public IShader {
Vec3f varying_intensity; // written by vertex shader, read by fragment shader
mat<2,3,float> varying_uv; // same as above

virtual Vec4f vertex(int iface, int nthvert) {
varying_uv.set_col(nthvert, model->uv(iface, nthvert));
varying_intensity[nthvert] = std::max(0.f, model->normal(iface, nthvert)*light_dir); // get diffuse lighting intensity
Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); // read the vertex from .obj file
return Viewport*Projection*ModelView*gl_Vertex; // transform it to screen coordinates
}

virtual bool fragment(Vec3f bar, TGAColor &color) {
float intensity = varying_intensity*bar; // interpolate intensity for the current pixel
Vec2f uv = varying_uv*bar; // interpolate uv for the current pixel
color = model->diffuse(uv)*intensity; // well duh
return false; // no, we do not discard this pixel
}
};

Result:

[image: textured shader render]

Normal mapping shader

In fact, a texture can store more than colors: normal directions, temperature, any per-texel data. Feeding per-pixel normals in through a texture produces a bumpy surface appearance. The RGB values of such a texture no longer encode color but a normal vector, as below.

[image: normal map]

At the end of the previous lesson we derived that if the model's coordinates are transformed by a matrix M, its normals must be transformed by the transpose of M's inverse. With that, the shader can compute the projected vertices and, via uniform_MIT, the transformed normals:

struct NormalShader : public IShader {
mat<2,3,float> varying_uv; // same as above
mat<4,4,float> uniform_M; // Projection*ModelView
mat<4,4,float> uniform_MIT; // (Projection*ModelView).invert_transpose()

virtual Vec4f vertex(int iface, int nthvert) {
varying_uv.set_col(nthvert, model->uv(iface, nthvert));
Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); // read the vertex from .obj file
return Viewport*Projection*ModelView*gl_Vertex; // transform it to screen coordinates
}

virtual bool fragment(Vec3f bar, TGAColor &color) {
Vec2f uv = varying_uv*bar; // interpolate uv for the current pixel
Vec3f n = proj<3>(uniform_MIT*embed<4>(model->normal(uv))).normalize();
Vec3f l = proj<3>(uniform_M *embed<4>(light_dir )).normalize();
float intensity = std::max(0.f, n*l);
color = model->diffuse(uv)*intensity; // well duh
return false; // no, we do not discard this pixel
}
};

Result:

[image: normal-mapped render]

Phong shader

[figure: the Phong reflection model]

According to the Phong lighting model, realistic lighting can be approximated as ambient + diffuse + specular. With this we can build a more convincing shader.

The specular term is computed as illustrated:

[figure: specular reflection geometry]

Given the surface normal n and the incident light direction l, with angle a between them and all vectors normalized, let r be the reflected direction. Then l + r = 2n cos a, so r = 2n cos a − l = 2n(n·l) − l.

struct PhongShader : public IShader {
mat<2,3,float> varying_uv; // same as above
mat<4,4,float> uniform_M; // Projection*ModelView
mat<4,4,float> uniform_MIT; // (Projection*ModelView).invert_transpose()

virtual Vec4f vertex(int iface, int nthvert) {
varying_uv.set_col(nthvert, model->uv(iface, nthvert));
Vec4f gl_Vertex = embed<4>(model->vert(iface, nthvert)); // read the vertex from .obj file
return Viewport*Projection*ModelView*gl_Vertex; // transform it to screen coordinates
}

virtual bool fragment(Vec3f bar, TGAColor &color) {
Vec2f uv = varying_uv*bar;
Vec3f n = proj<3>(uniform_MIT*embed<4>(model->normal(uv))).normalize();
Vec3f l = proj<3>(uniform_M *embed<4>(light_dir )).normalize();
Vec3f r = (n*(n*l*2.f) - l).normalize(); // reflected light
float spec = pow(std::max(r.z, 0.0f), model->specular(uv));
float diff = std::max(0.f, n*l);
TGAColor c = model->diffuse(uv);
color = c;
for (int i=0; i<3; i++) color[i] = std::min<float>(5 + c[i]*(diff + .6*spec), 255);
return false;
}
};

With ambient 5 plus the base color times (1.0 × diffuse + 0.6 × specular), the result looks like this:

[image: Phong-shaded render]

We can try other coefficient mixes, e.g. 10 + c[i]*(2 * diff + 1.5 * spec):

[image: render with boosted diffuse and specular]

We can also grab other models from the master branch and try rendering them:

[images: renders of other models]

GAMES101 Assignment 7: Path Tracing Pitfall Guide

Posted on 2024-04-16

First, recall how path tracing works, as shown below:

[figure: path tracing overview]

Basic idea

wo is the ray toward the eye (camera); its radiance has two parts: direct illumination from the light source (ws) and indirect illumination from other objects (wi).

In the implementation we follow the yellow arrow: conceptually the ray goes from the camera to point p (physically, from p to the camera), and we estimate the pixel's true color by repeatedly sampling rays leaving p (physically, arriving at p).

For efficiency, the light arriving at p is split into ws (from the light source) and wi (from other objects). Because they are computed separately, a ws blocked by an object contributes nothing, and a wi that happens to hit the light source is likewise not counted.

wi is computed recursively; the magic of Russian Roulette keeps the expected radiance unchanged while bounding the recursion depth.
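Written out, what the pseudocode computes at each bounce is (same symbols as the handout: N is the normal at p, NN the normal at the light sample x):

$$L_{dir}=\frac{L_{emit}\, f_r(\omega_o,\omega_s,N)\,(\omega_s\cdot N)\,(-\omega_s\cdot NN)}{\lvert x-p\rvert^2\; pdf_{light}},\qquad L_{indir}=\frac{shade(q,\omega_i)\, f_r(\omega_o,\omega_i,N)\,(\omega_i\cdot N)}{pdf(\omega_i)\; P_{RR}}.$$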

Then just code it up following the pseudocode in the assignment handout:

[figure: pseudocode from the assignment handout]

Notes

  • The right wall renders black: check Bound3::IntersectP; return t_enter <= t_exit && t_exit >= 0; fixes it.
  • Black triangles at the top-right corner of the small box: check Triangle::getIntersection in Triangle.hpp; an intersection with negative t must not count as a hit (see the sketch after this list).
  • Multithreading: the framebuffer index must be computed from i and j rather than the running counter m. Also add TARGET_LINK_LIBRARIES(RayTracing pthread) to CMakeLists.txt.

Code

Multithreading

// change the spp value to change sample amount
int spp = 32; // default:16
std::cout << "SPP: " << spp << "\n";

// for (uint32_t j = 0; j < scene.height; ++j) {
// for (uint32_t i = 0; i < scene.width; ++i) {
// // generate primary ray direction
// float x = (2 * (i + 0.5) / (float)scene.width - 1) *
// imageAspectRatio * scale;
// float y = (1 - 2 * (j + 0.5) / (float)scene.height) * scale;

// Vector3f dir = normalize(Vector3f(-x, y, 1));
// for (int k = 0; k < spp; k++){
// framebuffer[m] += scene.castRay(Ray(eye_pos, dir), 0) / spp;
// }
// m++;
// }
// UpdateProgress(j / (float)scene.height);
// }
// UpdateProgress(1.f);

const int thread_cnt = 12;
int finished_rows = 0; // rows rendered so far, for the progress bar
std::mutex mtx;

printf("%d %d\n", scene.height, scene.width);
auto multiThreadCastRay = [&](uint32_t y_min, uint32_t y_max)
{
printf("start %d %d\n", y_min, y_max);
for (uint32_t j = y_min; j <= y_max; ++j) {
for (uint32_t i = 0; i < scene.width; ++i) {
// generate primary ray direction
float x = (2 * (i + 0.5) / (float)scene.width - 1) *
imageAspectRatio * scale;
float y = (1 - 2 * (j + 0.5) / (float)scene.height) * scale;

Vector3f dir = normalize(Vector3f(-x, y, 1));
for (int k = 0; k < spp; k++) {
framebuffer[scene.width * j + i] += scene.castRay(Ray(eye_pos, dir), 0) / spp;
}
}
//printf("%d\n", j);
//UpdateProgress(j / (float)scene.height);
mtx.lock();
UpdateProgress(++finished_rows * 1.0 / scene.height);
mtx.unlock();
}
printf("ok %d %d\n", y_min, y_max);
};
int block = scene.height / thread_cnt + (scene.height % thread_cnt != 0);
std::thread th[thread_cnt];
for (int i = 0; i < thread_cnt; i++) {
th[i] = std::thread(multiThreadCastRay, i * block, std::min((i + 1) * block - 1, scene.height - 1));
}
for (int i = 0; i < thread_cnt; i++) th[i].join();
UpdateProgress(1.0);

Path tracing

// Implementation of Path Tracing
Vector3f Scene::castRay(const Ray &ray, int depth) const
{
// TO DO Implement Path Tracing Algorithm here
/*
shade(p, wo)
sampleLight(inter , pdf_light)
Get x, ws, NN, emit from inter
Shoot a ray from p to x
If the ray is not blocked in the middle
L_dir = emit * eval(wo, ws, N) * dot(ws, N) * dot(ws,
NN) / |x-p|^2 / pdf_light


L_indir = 0.0
//Test Russian Roulette with probability RussianRoulette
wi = sample(wo, N)
Trace a ray r(p, wi)
If ray r hit a non-emitting object at q
L_indir = shade(q, wi) * eval(wo, wi, N) * dot(wi, N)
/ pdf(wo, wi, N) / RussianRoulette

Return L_dir + L_indir
*/

Vector3f L_dir(0, 0, 0), L_indir(0, 0, 0);
//ray wo is screen to p, now find p and see if already hit light
Ray wo = ray;
Intersection p_inter = this->intersect(wo);
//if hit nothing
if (!p_inter.happened) return L_dir;
//if hit light source
if (p_inter.m->hasEmission()) return p_inter.m->getEmission();

//otherwise, it hit a object

//sampleLight(inter , pdf_light)
//uniformly sample x from all LIGHTS and get its pdf
Intersection x_inter; float x_pdf;
sampleLight(x_inter, x_pdf);

//Get x, ws, Nx, emit from inter
//ws is from p to x(light), Np is at p, Nx is at x(light)
Vector3f p = p_inter.coords;
Vector3f x = x_inter.coords;
Vector3f Np = p_inter.normal;
Vector3f Nx = x_inter.normal;
Vector3f emit = x_inter.emit;

//Shoot a ray (ws) from p to x(light)
Vector3f ws_dir = (x - p).normalized();
Ray ws(p, ws_dir);
Intersection ws_inter = this->intersect(ws);

// If the ray is NOT blocked in the middle
// L_dir = emit * eval(wo, ws, N) * dot(ws, N) * dot(ws,
// NN) / |x-p|^2 / pdf_light
// Else L_dir = 0.0

//calc length of p - x and ws_inter to see if it is blocked
float px_dis = (x - p).norm(), ws_dis = ws_inter.distance;
if (px_dis - ws_dis < 0.001) {
L_dir = emit
* p_inter.m->eval(wo.direction, ws.direction, Np)
* dotProduct(ws.direction, Np) // all vectors were normalized
* dotProduct(-ws.direction, Nx) //so dot product is cosine
/ pow(px_dis, 2)
/ x_pdf;
} // else L_dir = 0; no need

// Now calculate L_indir
// Test Russian Roulette with probability RussianRoulette
float P_rand = get_random_float();
if (P_rand < RussianRoulette) {
//wi = sample(wo, N)
//wi is from p to q
Vector3f wi_dir = p_inter.m->sample(wo.direction, Np).normalized();
Ray wi(p_inter.coords, wi_dir);
// Trace a ray r(p, wi)
// If ray r hit a non-emitting object at q
// L_indir = shade(q, wi) * eval(wo, wi, N) * dot(wi, N)
// / pdf(wo, wi, N) / RussianRoulette
Intersection wi_inter = this->intersect(wi);
if (wi_inter.happened && !(wi_inter.m->hasEmission())) {
L_indir = castRay(wi, depth + 1)
* p_inter.m->eval(wo.direction, wi.direction, Np)
* dotProduct(wi.direction, Np)
/ p_inter.m->pdf(wo.direction, wi.direction, Np)
/ RussianRoulette;
}
}
return L_dir + L_indir;
}

hello world nya

Posted on 2024-04-14

qwq

qwq

qwq

qwq

Hello World

Posted on 2024-04-14

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment
