
KVM Architecture and Principles: CPU Virtualization

2023/07/01


Most cloud providers today implement their cloud servers with Xen or KVM. Because KVM's performance rivals paravirtualization (PV), it is built on hardware virtualization from the start, it was merged into the mainline Linux kernel in 2.6.20, and it is cheap to maintain, the majority of these deployments are based on KVM.

To study KVM you first need the following basic terms:

KVM: Kernel-based Virtual Machine, a virtual machine built into the Linux kernel, running in ring 0

VMM: Virtual Machine Monitor, the virtualization monitoring layer; here, the KVM kernel modules

VM: Virtual Machine, the virtualized hardware platform created by the VMM

Guest OS: the operating system running inside a VM

Host OS: the operating system running on the physical hardware

Virtualization hole: even with the guest deprivileged, some sensitive instructions still cannot be trapped by the VMM

Full virtualization: the Guest OS runs in the VM without any modification.

Paravirtualization: the guest kernel must be modified before it can be virtualized, which is more cumbersome than full virtualization

What is virtualization?

Before hardware supported virtualization, virtualization had to be done purely in software: every CPU instruction was emulated in software and then executed. The drawbacks are extremely low efficiency, and that only instructions supported by the host's own hardware could be emulated (for example, an Intel chip cannot emulate AMD-specific instructions).

In addition, in a pure software implementation the guest OS (which can be thought of as running in ring 1) executes sensitive instructions and the VMM layer (in ring 0) can trap most of them. Because of the virtualization holes mentioned above, binary translation was used to close the gap: sensitive instructions are rewritten into equivalent, safe instruction sequences before they execute. This is still cumbersome, so chip vendors such as Intel and AMD added hardware support for virtualization, namely Intel Virtualization Technology (Intel VT) and AMD Virtualization (AMD-V). The rest of this article focuses on Intel VT. (As an aside, VT can also be used to build a hypervisor-based debugger that evades certain game anti-cheat checks; look into it if you are interested.)

(Figure: software virtualization architecture)

Intel hardware virtualization: VT-x

On the traditional IA-32 architecture Linux has two modes, user mode and kernel mode. VT-x extends traditional IA-32 with two operating modes dedicated to virtualization: root mode (VMX root operation) and non-root mode (VMX non-root operation), collectively called VMX operation. Both root and non-root mode have four privilege levels (ring 0 to ring 3), just like a traditional OS, but root mode generally runs the KVM kernel modules and the host, while non-root mode runs the virtual machines. This mechanism provides state isolation (separating host from guest) and state transitions (switching from host to guest; the same save-and-switch idea appears everywhere in the Linux kernel, e.g. switching from user mode to kernel mode, process/thread switches, and the system call path).

Concretely, host and guest run in the following modes:

  1. The VMM (the KVM kernel modules) runs in Ring 0 of root mode
  2. User-space applications on the host run in Ring 3 of root mode
  3. The guest OS kernel runs in Ring 0 of non-root mode
  4. User-space applications in the guest run in Ring 3 of non-root mode

As with the state switching mentioned above (analogous to how a system call saves and restores the current state on the stack when switching from user mode to kernel mode), VMX provides the VMCS (Virtual-Machine Control Structure) to hold the state that must be saved when the context switches between host and guest, and VT-x provides VM-Entry and VM-Exit as the two transitions. On a VM-Exit the hardware automatically saves the current context into the guest-state area of the VMCS and loads the processor state from the host-state area of the VMCS; on a VM-Entry the CPU automatically loads the guest-state area of the VMCS into the CPU (note that host state is not saved on entry, because it is the same every time). In this way the state switch is performed entirely by hardware.

The VMCS structure

The VMCS is a data structure kept in memory, at most 4 KiB in size; when it is allocated, its start address must be aligned to a 4 KiB boundary. Each VMCS corresponds to one virtual CPU and must be bound to a physical CPU while in use; at any moment a VMCS can be bound to only one physical CPU (a one-to-one binding). VT-x provides two instructions to bind and unbind a VMCS:

VMPTRLD <VMCS address>: binds the VMCS at the specified address to the physical CPU that executes the instruction.

VMCLEAR <VMCS address>: breaks the binding between the VMCS and its physical CPU.
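
As a rough illustration of how these two instructions are used (this is illustrative inline assembly, not KVM's actual helpers; it assumes the CPU is already in VMX operation after VMXON and that vmcs_pa holds the 4 KiB-aligned physical address of an initialized VMCS region):

#include <stdint.h>

static inline int vmptrld(uint64_t vmcs_pa)
{
    uint8_t err;
    /* make the VMCS at vmcs_pa the "current VMCS" on this logical CPU */
    asm volatile("vmptrld %1; setna %0"
                 : "=qm"(err) : "m"(vmcs_pa) : "cc", "memory");
    return err ? -1 : 0;
}

static inline int vmclear(uint64_t vmcs_pa)
{
    uint8_t err;
    /* flush the VMCS data back to memory and break its binding to this CPU */
    asm volatile("vmclear %1; setna %0"
                 : "=qm"(err) : "m"(vmcs_pa) : "cc", "memory");
    return err ? -1 : 0;
}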

The VMCS typically holds the following information (a small read-access sketch follows the list):

  1. Guest-state area

    1. The guest's context at the time it was running: registers and so on
  2. Host-state area

  3. VM-entry control fields

    1. Control the VM-Entry process, e.g. which MSRs to load, and flag bits that determine whether and how the virtual machine is entered
  4. VM-execution control fields

  5. VM-exit control fields

  6. VM-exit information fields
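
VMCS fields are not laid out as a plain C struct; software reads and writes them with the VMREAD/VMWRITE instructions using per-field encodings. Below is a minimal sketch of a read helper in the style of KVM's vmcs_read*() functions; the two field encodings come from the Intel SDM (they also appear in arch/x86/include/asm/vmx.h), and the code only works in VMX root mode with a current VMCS:

#include <stdint.h>

#define VM_EXIT_REASON 0x00004402UL   /* VM-exit information field */
#define GUEST_RIP      0x0000681eUL   /* guest-state area field */

static inline uint64_t vmcs_read(uint64_t field)
{
    uint64_t value;
    /* VMREAD: value <- current-VMCS[field]; failure flags ignored here */
    asm volatile("vmread %1, %0" : "=rm"(value) : "r"(field) : "cc");
    return value;
}

/* e.g. right after a VM-Exit:
 *   uint32_t reason = (uint32_t)vmcs_read(VM_EXIT_REASON);
 *   uint64_t rip    = vmcs_read(GUEST_RIP);
 */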

KVM architecture

KVM is integrated into the Linux kernel; the relevant source can be found under the kernel's virt/ directory. KVM consists of a user-space Qemu-kvm process and two kernel modules (kvm.ko and kvm-intel.ko). While loading, the kernel modules register the character device /dev/kvm and expose an API to upper-layer applications; the Qemu-kvm process requests services from the kernel through ioctl system calls on this character device, e.g. creating a VM, creating VCPUs, and starting VCPU execution. Functionally, the kernel modules are responsible for creating the virtual machine and performing CPU and memory virtualization, while the Qemu-kvm process is responsible for emulating I/O devices, the virtual LAPIC, and so on (a minimal user-space sketch of this ioctl interface follows).
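
A minimal sketch: the ioctl names are the real KVM API, the surrounding code is illustrative only and omits all error handling.

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    int kvmfd = open("/dev/kvm", O_RDWR | O_CLOEXEC);  /* the char device registered by kvm.ko */
    int api   = ioctl(kvmfd, KVM_GET_API_VERSION, 0);  /* expected to be 12 on current kernels */
    int vmfd  = ioctl(kvmfd, KVM_CREATE_VM, 0);        /* returns a VM fd used for further ioctls */
    printf("api=%d vmfd=%d\n", api, vmfd);
    return 0;
}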

How a virtual machine runs

  1. The user-space Qemu-kvm operates the /dev/kvm character device through ioctl system calls and starts creating the VM and its VCPUs;
  2. The kernel KVM module creates and initializes the relevant data structures, then returns file descriptors to user space;
  3. Qemu-kvm runs a VCPU through an ioctl system call, i.e. schedules the corresponding VM to run;
  4. After some preparation in the kernel, the VMLAUNCH instruction is executed and the Guest OS is entered through VM-Entry; the Guest OS runs in non-root mode;
  5. The Guest OS executes the virtual machine's code; non-sensitive instructions run directly on the physical CPU;
  6. When the Guest OS hits a sensitive instruction, an external interrupt arrives, or an internal exception occurs in the Guest OS, a VM-Exit is generated and the relevant information is recorded in the VMCS;
  7. The VM-Exit returns control to the VMM execution environment, and the VMM reads the reason for the VM-Exit out of the VMCS;
  8. If it is an I/O operation or another device instruction, control returns to the user-space Qemu-kvm (Ring 3 of root mode), which emulates the instruction;
  9. If not, the VMM handles it itself;
  10. After handling, a VM-Entry re-enters the Guest OS (a minimal user-space run-loop sketch of steps 3-10 follows).
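
A hedged sketch of steps 3 to 10 as seen from user space; it assumes vcpufd came from KVM_CREATE_VCPU and run is that vCPU's mmap'ed struct kvm_run, and it reduces exit handling to the I/O case.

#include <linux/kvm.h>
#include <sys/ioctl.h>

void run_loop(int vcpufd, struct kvm_run *run)
{
    for (;;) {
        ioctl(vcpufd, KVM_RUN, 0);      /* VM-Entry; returns only on exits the kernel cannot handle */
        switch (run->exit_reason) {     /* exit reason copied out of the VMCS by the KVM module */
        case KVM_EXIT_IO:               /* step 8: port I/O is emulated here in user space */
            /* decode run->io and emulate the device access */
            break;
        case KVM_EXIT_HLT:              /* the guest executed HLT */
            return;
        default:                        /* other reasons were already handled inside the kernel */
            break;
        }
    }
}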

VM creation and CPU virtualization in the qemu-kvm source

After starting a virtual machine, the process list shows the user-space qemu-kvm process:


VM creation is done jointly by the user-space qemu-kvm process and the kernel KVM modules (kvm.ko & kvm-intel.ko). The kernel modules register the /dev/kvm character device while loading; the user-space process then operates the character device through ioctl system calls, after which the kernel modules create the relevant data structures and file descriptors and return them to user space.

Below we walk through this flow in the qemu/kvm source, starting with the user-space qemu-kvm side that creates the virtual machine.

(The code below is from QEMU 5.1.0 and KVM from kernel 3.4.x.)

int kvm_init(QEMUMachine *machine){
KVMState *s;
s->vmfd = -1;
s->fd = qemu_open("/dev/kvm", O_RDWR); // open the /dev/kvm character device

ret = kvm_ioctl(s, KVM_GET_API_VERSION, 0);

ret = kvm_ioctl(s, KVM_CREATE_VM, type); // create the VM via the ioctl system call

ret = kvm_arch_init(s);
ret = kvm_irqchip_create(s);

memory_listener_register(&kvm_memory_listener, &address_space_memory);
memory_listener_register(&kvm_io_listener, &address_space_io);

s->many_ioeventfds = kvm_check_many_ioeventfds();

cpu_interrupt_handler = kvm_handle_interrupt;
}

int kvm_ioctl(KVMState *s, int type, ...){ // a checked wrapper around the ioctl system call
trace_kvm_ioctl(type, arg);
ret = ioctl(s->fd, type, arg);
}

The VM is created through the KVM_CREATE_VM ioctl; let's jump into the KVM side:

static long kvm_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
switch (ioctl) { /* abridged */
case KVM_CREATE_VM:
r = kvm_dev_ioctl_create_vm(arg);
break;
}
}

static int kvm_dev_ioctl_create_vm(unsigned long type)
{
struct kvm *kvm;
kvm = kvm_create_vm(type);
file = anon_inode_getfile("kvm-vm", &kvm_vm_fops, kvm, O_RDWR);
}

This creates the kvm structure: each virtual machine corresponds to one kvm structure, which holds memory, interrupt, vcpu, I/O bus, and other information. Operating on a virtual machine really means operating on this structure. Let's look at some of its important members:

struct kvm {
#ifdef KVM_HAVE_MMU_RWLOCK
rwlock_t mmu_lock;
#else
spinlock_t mmu_lock;
#endif

struct mutex slots_lock;

/*
* Protects the kvm_memory_slots
*/
struct mutex slots_arch_lock;
struct mm_struct *mm; // address space of the user process (qemu-kvm)
unsigned long nr_memslot_pages;

/* The two memslot sets - active and inactive (per address space) */
//mapping between guest physical addresses (GPA) and host virtual addresses (HVA)
struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];

/* The current active memslot set for each address space */
struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];

//array of all vcpus created for this VM
struct xarray vcpu_array;
/*
* Protected by slots_lock, but can be read outside if an
* incorrect answer is acceptable.
*/
atomic_t nr_memslots_dirty_logging;

/* Used to wait for completion of MMU notifiers. */
spinlock_t mn_invalidate_lock;
unsigned long mn_active_invalidate_count;
struct rcuwait mn_memslots_update_rcuwait;

/* For management / invalidation of gfn_to_pfn_caches */
spinlock_t gpc_lock;
struct list_head gpc_list;

//every created VM is added to this doubly linked list; walking the list enumerates all VMs
struct list_head vm_list;
struct mutex lock;

//array of I/O buses in the VM
struct kvm_io_bus __rcu *buses[KVM_NR_BUSES];

struct {
spinlock_t lock;
struct list_head items;
/* resampler_list update side is protected by resampler_lock. */
struct list_head resampler_list;
struct mutex resampler_lock;
} irqfds;
struct list_head ioeventfds;

//runtime state and statistics of the VM, including MMU, page tables, etc.
struct kvm_vm_stat stat;
//architecture-specific part
struct kvm_arch arch;
//reference count on this kvm
refcount_t users_count;
};

Next, the body of kvm_create_vm():

//mainly does some initialization work on the kvm structure
static struct kvm *kvm_create_vm(unsigned long type)
{
struct kvm *kvm = kvm_arch_alloc_vm();

if (!kvm)
return ERR_PTR(-ENOMEM);
...
//point mm at the current process's address space (i.e. the user-space qemu-kvm process)
kvm->mm = current->mm;

r = kvm_arch_init_vm(kvm, type);
if (r)
goto out_err_no_disable;

r = hardware_enable_all();
if (r)
goto out_err_no_disable;

//allocate the memory-slot structures for the VM (one set per address space)
for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
struct kvm_memslots *slots = kvm_alloc_memslots();
if (!slots)
goto out_err_no_srcu;
/*
* Generations must be different for each address space.
* Init kvm generation close to the maximum to easily test the
* code of handling generation number wrap-around.
*/
slots->generation = i * 2 - 150;
rcu_assign_pointer(kvm->memslots[i], slots);
}

//initialize the I/O buses: allocate memory for each bus
for (i = 0; i < KVM_NR_BUSES; i++) {
kvm->buses[i] = kzalloc(sizeof(struct kvm_io_bus),
GFP_KERNEL);
if (!kvm->buses[i])
goto out_err;
}

return kvm;
}

Creating a VCPU

In KVM a vcpu is, at its core, just a structure. The concrete creation flow of a vcpu is:

  1. Assign the VCPU an identifier;
  2. Initialize the virtual register set, i.e. the virtual machine's execution context; the registers are initialized to the values the hardware gives them when a physical machine is first powered on. With VT-x this data is stored separately in the VMCS;
  3. Initialize the VCPU's state information; a virtual machine runs intermittently, so a run-state value must be maintained;
  4. Initialize some additional register information.

Looking at vcpu creation from the user-space qemu-kvm process: creating a virtual machine (VM) really means creating an application process on the system (starting a Qemu-kvm process), while creating a VCPU means starting a thread inside that process. A minimal ioctl-level sketch is given first, followed by the actual QEMU user-space code.
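
Stripped of the QEMU plumbing, vcpu creation at the /dev/kvm level can be sketched like this (a hypothetical helper, not QEMU code; kvmfd comes from opening /dev/kvm, vmfd from KVM_CREATE_VM, and error checks are omitted):

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct kvm_run *create_vcpu(int kvmfd, int vmfd, int *vcpufd_out)
{
    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);           /* vcpu id 0 */
    long sz    = ioctl(kvmfd, KVM_GET_VCPU_MMAP_SIZE, 0);   /* size of the shared kvm_run area */
    struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);       /* shared with the kvm module */
    *vcpufd_out = vcpufd;
    return run;
}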

//start the vcpu thread
static void qemu_kvm_start_vcpu(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];

cpu->thread = g_malloc0(sizeof(QemuThread));
cpu->halt_cond = g_malloc0(sizeof(QemuCond));
qemu_cond_init(cpu->halt_cond);
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/KVM",
cpu->cpu_index);
//create the thread
qemu_thread_create(cpu->thread, thread_name, qemu_kvm_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
}


static void *qemu_kvm_cpu_thread_fn(void *arg)
{
CPUState *cpu = arg;
int r;

rcu_register_thread();
cpu->thread_id = qemu_get_thread_id();
cpu->can_do_io = 1;
current_cpu = cpu;

//create the vcpu
r = kvm_init_vcpu(cpu);
if (r < 0) {
error_report("kvm_init_vcpu failed: %s", strerror(-r));
exit(1);
}

kvm_init_cpu_signals(cpu);

/* signal CPU creation */
cpu->created = true;
//wake up the main thread via the condition variable
qemu_cond_signal(&qemu_cpu_cond);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
//run the cpu in a loop
do {
if (cpu_can_run(cpu)) {
r = kvm_cpu_exec(cpu);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
}
}
/*
 * When the guest performs I/O, a VM-Exit occurs and control eventually
 * returns to the qemu-kvm process, which performs the I/O emulation.
 * qemu_wait_io_event() is then called; once handling is done, the loop
 * calls kvm_cpu_exec() again to resume the vcpu and keep running the
 * guest in non-root mode.
 */
qemu_wait_io_event(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
}

int kvm_init_vcpu(CPUState *cpu){
DPRINTF("kvm_init_vcpu\n");
//calls kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id); to create the vcpu
ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
if (ret < 0) {
DPRINTF("kvm_create_vcpu failed\n");
goto err;
}
//query the size of the shared kvm_run mapping via the ioctl system call
mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
}

Following into the kernel code:

static long kvm_vm_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
switch (ioctl) { /* abridged */
case KVM_CREATE_VCPU:
r = kvm_vm_ioctl_create_vcpu(kvm, arg);
break;
}
}

static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
{
int r;
struct kvm_vcpu *vcpu;

/* ... various sanity checks ... */

vcpu = kvm_arch_vcpu_create(kvm, id);
/* initializes the vcpu structure; digging in, this mainly sets up the
 * register state the VCPU needs to enter non-root mode */
r = kvm_arch_vcpu_setup(vcpu);
if (r)
goto vcpu_destroy;

r = kvm_create_vcpu_debugfs(vcpu);
if (r)
goto vcpu_destroy;

mutex_lock(&kvm->lock);
if (kvm_get_vcpu_by_id(kvm, id)) {
r = -EEXIST;
goto unlock_vcpu_destroy;
}

/* Now it's all set up, let userspace reach it */
kvm_get_kvm(kvm);

/* create the vcpu file descriptor for user space */
r = create_vcpu_fd(vcpu);
if (r < 0) {
kvm_put_kvm(kvm);
goto unlock_vcpu_destroy;
}

/* add the vcpu into the kvm structure */
kvm->vcpus[atomic_read(&kvm->online_vcpus)] = vcpu;

kvm_arch_vcpu_postcreate(vcpu);
return r;

unlock_vcpu_destroy:
mutex_unlock(&kvm->lock);
debugfs_remove_recursive(vcpu->debugfs_dentry);
vcpu_destroy:
kvm_arch_vcpu_destroy(vcpu);
vcpu_decrement:
mutex_lock(&kvm->lock);
kvm->created_vcpus--;
mutex_unlock(&kvm->lock);
return r;
}

The vcpu is now created and bound to a physical CPU; next comes running the vcpu.

As described above, a vcpu is really a thread created by the qemu-kvm process, so executing a vcpu boils down to thread scheduling; the virtual machine is run by issuing an ioctl with the KVM_RUN command on the vcpu's file descriptor.

r = vcpu_load(vcpu);
if (r)
return r;
switch (ioctl) {
case KVM_RUN:
r = -EINVAL;
if (arg)
goto out;
if (unlikely(vcpu->pid != current->pids[PIDTYPE_PID].pid)) {
/* The thread running this VCPU changed. */
struct pid *oldpid = vcpu->pid;
struct pid *newpid = get_task_pid(current, PIDTYPE_PID);

rcu_assign_pointer(vcpu->pid, newpid);
if (oldpid)
synchronize_rcu();
put_pid(oldpid);
}
r = kvm_arch_vcpu_ioctl_run(vcpu, vcpu->run);
trace_kvm_userspace_exit(vcpu->run->exit_reason, r);
break;


int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
{

ret = kvm_vcpu_first_run_init(vcpu);
if (ret)
return ret;
/* check for a pending MMIO exit that still needs completing */
if (run->exit_reason == KVM_EXIT_MMIO) {
ret = kvm_handle_mmio_return(vcpu, vcpu->run);
if (ret)
return ret;
}
run->exit_reason = KVM_EXIT_UNKNOWN;

while (ret > 0) {
/* block signals */

/* the crucial part: */

/**************************************************************
* Enter the guest
*/
trace_kvm_entry(*vcpu_pc(vcpu));
guest_enter_irqoff();
vcpu->mode = IN_GUEST_MODE;

ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);

vcpu->mode = OUTSIDE_GUEST_MODE;
vcpu->stat.exits++;
/*
* Back from guest
*************************************************************/
}


//this function is not found in the KVM version used above, presumably because that version is too old; the code below is quoted from KVM 6.4.10
int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
{
/* host and guest CPU contexts */
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
struct kvm_s2_mmu *mmu;

host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
host_ctxt->__hyp_running_vcpu = vcpu;
guest_ctxt = &vcpu->arch.ctxt;

pmu_switch_needed = __pmu_switch_to_guest(vcpu);

__sysreg_save_state_nvhe(host_ctxt);

__debug_save_host_buffers_nvhe(vcpu);

__kvm_adjust_pc(vcpu);

__sysreg32_restore_state(vcpu);
__sysreg_restore_state_nvhe(guest_ctxt);

mmu = kern_hyp_va(vcpu->arch.hw_mmu);
__load_stage2(mmu, kern_hyp_va(mmu->arch));
__activate_traps(vcpu);

__hyp_vgic_restore_state(vcpu);
__timer_enable_traps(vcpu);

__debug_switch_to_guest(vcpu);

do {
/* Jump in the fire! */
exit_code = __guest_enter(vcpu);

/* And we're baaack! */
} while (fixup_guest_exit(vcpu, &exit_code));

__sysreg_save_state_nvhe(guest_ctxt);
__sysreg32_save_state(vcpu);
__timer_disable_traps(vcpu);
__hyp_vgic_save_state(vcpu);

/*
* Same thing as before the guest run: we're about to switch
* the MMU context, so let's make sure we don't have any
* ongoing EL1&0 translations.
*/
dsb(nsh);

__deactivate_traps(vcpu);
__load_host_stage2();

__sysreg_restore_state_nvhe(host_ctxt);

if (vcpu->arch.fp_state == FP_STATE_GUEST_OWNED)
__fpsimd_save_fpexc32(vcpu);

__debug_switch_to_host(vcpu);
/*
* This must come after restoring the host sysregs, since a non-VHE
* system may enable SPE here and make use of the TTBRs.
*/
__debug_restore_host_buffers_nvhe(vcpu);

if (pmu_switch_needed)
__pmu_switch_to_host(vcpu);

/* Returning to host will clear PSR.I, remask PMR if needed */
if (system_uses_irq_prio_masking())
gic_write_pmr(GIC_PRIO_IRQOFF);

host_ctxt->__hyp_running_vcpu = NULL;

return exit_code;
}

The vcpu run function that is ultimately called; different kernels implement it differently, but they are broadly similar:
static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
{
/* handle pending requests: timer migration, TLB flush, master clock update, MMU sync, etc. */
if (kvm_request_pending(vcpu)) {

}

static_call(kvm_x86_prepare_switch_to_guest)(vcpu);


for (;;) {
WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
(kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));

exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
break;

if (kvm_lapic_enabled(vcpu))
static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);

if (unlikely(kvm_vcpu_exit_request(vcpu))) {
exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
break;
}

/* Note, VM-Exits that go down the "slow" path are accounted below. */
++vcpu->stat.exits;
}

/*
* Do this here before restoring debug registers on the host. And
* since we do this before handling the vmexit, a DR access vmexit
* can (a) read the correct value of the debug registers, (b) set
* KVM_DEBUGREG_WONT_EXIT again.
*/
if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
kvm_update_dr0123(vcpu);
kvm_update_dr7(vcpu);
}

/*
* If the guest has used debug registers, at least dr7
* will be disabled while returning to the host.
* If we don't have active breakpoints in the host, we don't
* care about the messed up debug address registers. But if
* we have some of them active, restore the old state.
*/
if (hw_breakpoint_active())
hw_breakpoint_restore();

vcpu->arch.last_vmentry_cpu = vcpu->cpu;
vcpu->arch.last_guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());

vcpu->mode = OUTSIDE_GUEST_MODE;
smp_wmb();

/*
* Sync xfd before calling handle_exit_irqoff() which may
* rely on the fact that guest_fpu::xfd is up-to-date (e.g.
* in #NM irqoff handler).
*/
if (vcpu->arch.xfd_no_write_intercept)
fpu_sync_guest_vmexit_xfd_state();

static_call(kvm_x86_handle_exit_irqoff)(vcpu);

if (vcpu->arch.guest_fpu.xfd_err)
wrmsrl(MSR_IA32_XFD_ERR, 0);

/*
* Consume any pending interrupts, including the possible source of
* VM-Exit on SVM and any ticks that occur between VM-Exit and now.
* An instruction is required after local_irq_enable() to fully unblock
* interrupts on processors that implement an interrupt shadow, the
* stat.exits increment will do nicely.
*/
kvm_before_interrupt(vcpu, KVM_HANDLING_IRQ);
local_irq_enable();
++vcpu->stat.exits;
local_irq_disable();
kvm_after_interrupt(vcpu);

guest_timing_exit_irqoff();

local_irq_enable();
preempt_enable();

kvm_vcpu_srcu_read_lock(vcpu);

/*
* Profile KVM exit RIPs:
*/
if (unlikely(prof_on == KVM_PROFILING)) {
unsigned long rip = kvm_rip_read(vcpu);
profile_hit(KVM_PROFILING, (void *)rip);
}

if (unlikely(vcpu->arch.tsc_always_catchup))
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);

if (vcpu->arch.apic_attention)
kvm_lapic_sync_from_vapic(vcpu);

r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
return r;
}

The code, interleaved with assembly, is long and tedious, but it boils down to handling pending events, interrupts and exceptions, signal handling, and setting up the stack and related variables.

VCPU exit

In non-root mode the guest occupies the processor and executes most non-sensitive instructions directly, until it hits a sensitive instruction, or an interrupt or exception causes a VM-Exit and control returns to the VMM execution environment.

After a VM-Exit, the CPU switches from non-root mode to root mode and RIP (EIP) points at .Lkvm_vmx_return, i.e. execution in the VMM environment resumes there (loaded from CS:RIP in the host-state area of the VMCS):

static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu){
vmx_vcpu_enter_exit(vcpu, __vmx_vcpu_run_flags(vmx));
}

static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
unsigned int flags)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);

guest_state_enter_irqoff();

vmx_disable_fb_clear(vmx);

if (vcpu->arch.cr2 != native_read_cr2())
native_write_cr2(vcpu->arch.cr2);

vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
flags);

/* handle VM-Exits caused by interrupts, exceptions, and other reasons */
if (unlikely(vmx->fail))
vmx->exit_reason.full = 0xdead;
else
vmx->exit_reason.full = vmcs_read32(VM_EXIT_REASON);

if ((u16)vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI &&
is_nmi(vmx_get_intr_info(vcpu))) {
kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
vmx_do_nmi_irqoff();
kvm_after_interrupt(vcpu);
}

guest_state_exit_irqoff();
}