VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp@ 93609

Last change on this file since 93609 was 93554, checked in by vboxsync, 3 years ago

VMM: Changed PAGE_SIZE -> GUEST_PAGE_SIZE / HOST_PAGE_SIZE, PAGE_SHIFT -> GUEST_PAGE_SHIFT / HOST_PAGE_SHIFT, and PAGE_OFFSET_MASK -> GUEST_PAGE_OFFSET_MASK / HOST_PAGE_OFFSET_MASK. Also removed most usage of ASMMemIsZeroPage and ASMMemZeroPage since the host and guest page size doesn't need to be the same any more. Some work left to do in the page pool code. bugref:9898

/* $Id: CPUM.cpp 93554 2022-02-02 22:57:02Z vboxsync $ */
/** @file
 * CPUM - CPU Monitor / Manager.
 */

/*
 * Copyright (C) 2006-2022 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/** @page pg_cpum     CPUM - CPU Monitor / Manager
 *
 * The CPU Monitor / Manager keeps track of all the CPU registers.  It is
 * also responsible for lazy FPU handling and some of the context loading
 * in raw mode.
 *
 * There are three CPU contexts; the most important one is the guest context
 * (GC).  When running in raw-mode (RC) there is a special hyper context for
 * the VMM part that floats around inside the guest address space.  When
 * running in raw-mode, CPUM also maintains a host context for saving and
 * restoring registers across world switches.  The latter is done in
 * cooperation with the world switcher (@see pg_vmm).
 *
 * @see grp_cpum
 *
 * @section sec_cpum_fpu        FPU / SSE / AVX / ++ state.
 *
 * TODO: proper write-up, currently just some notes.
 *
 * The ring-0 FPU handling per OS:
 *
 *  - 64-bit Windows uses XMM registers in the kernel as part of the calling
 *    convention (Visual C++ doesn't seem to have a way to disable
 *    generating such code either), so CR0.TS/EM are always zero from what I
 *    can tell.  We are also forced to always load/save the guest XMM0-XMM15
 *    registers when entering/leaving guest context.  Interrupt handlers
 *    using FPU/SSE will officially have to call save and restore functions
 *    exported by the kernel, if they really have to use the state.
 *
 *  - 32-bit Windows does lazy FPU handling, I think, probably including
 *    lazy saving.  The Windows Internals book states that it's a bad
 *    idea to use the FPU in kernel space.  However, it looks like it will
 *    restore the FPU state of the current thread in case of a kernel \#NM.
 *    Interrupt handlers should be the same as for 64-bit.
 *
 *  - Darwin allows taking \#NM in kernel space, restoring the current
 *    thread's state if I read the code correctly.  It saves the FPU state of
 *    the outgoing thread, and uses CR0.TS to lazily load the state of the
 *    incoming one.  No idea yet how the FPU is treated by interrupt
 *    handlers, i.e. whether they are allowed to disable the state or
 *    something.
 *
 *  - Linux also allows \#NM in kernel space (don't know since when), and
 *    uses CR0.TS for lazy loading.  It saves the outgoing thread's state and
 *    lazily loads the incoming one, unless configured to aggressively load
 *    it.  Interrupt handlers can ask whether they're allowed to use the FPU,
 *    and may freely trash the state if Linux thinks it has saved the
 *    thread's state already.  This is a problem.
 *
 *  - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
 *    context.  When switching threads, the kernel will save the state of
 *    the outgoing thread and lazily load the incoming one using CR0.TS.
 *    There are a few routines in sseblk.s which use the SSE unit in ring-0
 *    to do stuff; HAT is among the users.  The routines there will
 *    manually clear CR0.TS and save the XMM registers they use, but only if
 *    CR0.TS was zero upon entry.  They will skip it otherwise because, as
 *    mentioned above, the FPU state is saved when switching away from a
 *    thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
 *    preserve.  This is a problem if we restore CR0.TS to 1 after loading
 *    the guest state.
 *
 *  - FreeBSD - no idea yet.
 *
 *  - OS/2 does not allow \#NMs in kernel space IIRC.  Does lazy loading,
 *    possibly also lazy saving.  Interrupts must preserve the CR0.TS+EM &
 *    FPU states.
 *
 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
 * saving and restoring the host and guest states.  The motivation for this
 * change is that we want to be able to emulate SSE instructions in ring-0
 * (IEM).
 *
 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
 * state and only restore it once we've restored the host FPU state.  This has
 * the accidental side effect of triggering Solaris to preserve XMM registers
 * in sseblk.s.  Since saving the FPU state changes CR0, CPUM must now inform
 * the VT-x (HMVMX) code about it, as that code caches the CR0 value in the
 * VMCS.
 *
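 * To make the "lazy loading using CR0.TS" referred to throughout the notes
 * above concrete, here is a minimal sketch of the classical scheme
 * (illustrative only; mySaveFpuState/myLoadFpuState are hypothetical
 * helpers, not VirtualBox APIs):
 * @code
 *      // Context switch: save the outgoing thread's state eagerly, then set
 *      // CR0.TS so the first FPU/SSE instruction of the new thread traps.
 *      mySaveFpuState(pPrevThread);
 *      ASMSetCR0(ASMGetCR0() | X86_CR0_TS);
 *
 *      // #NM handler: clear CR0.TS (the CLTS instruction) and load the
 *      // incoming thread's state on demand.
 *      ASMSetCR0(ASMGetCR0() & ~X86_CR0_TS);
 *      myLoadFpuState(pCurThread);
 * @endcode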
 *
 * @section sec_cpum_logging    Logging Level Assignments.
 *
 * The following log level assignments are used:
 *      - Log6 is used for FPU state management.
 *      - Log7 is used for FPU state actualization.
 *
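 * For example, a state-management event would typically be logged as
 * (illustrative statement, not a line from this file):
 * @code
 *      Log6(("CPUM: Loaded guest FPU state for VCPU #%u\n", pVCpu->idCpu));
 * @endcode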
 */


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_CPUM
#include <VBox/vmm/cpum.h>
#include <VBox/vmm/cpumdis.h>
#include <VBox/vmm/cpumctx-v1_6.h>
#include <VBox/vmm/pgm.h>
#include <VBox/vmm/apic.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/em.h>
#include <VBox/vmm/iem.h>
#include <VBox/vmm/selm.h>
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/hm.h>
#include <VBox/vmm/hmvmxinline.h>
#include <VBox/vmm/ssm.h>
#include "CPUMInternal.h"
#include <VBox/vmm/vm.h>

#include <VBox/param.h>
#include <VBox/dis.h>
#include <VBox/err.h>
#include <VBox/log.h>
#if defined(RT_ARCH_X86) || defined(RT_ARCH_AMD64)
# include <iprt/asm-amd64-x86.h>
#endif
#include <iprt/assert.h>
#include <iprt/cpuset.h>
#include <iprt/mem.h>
#include <iprt/mp.h>
#include <iprt/string.h>


/*********************************************************************************************************************************
*   Defined Constants And Macros                                                                                                 *
*********************************************************************************************************************************/
/**
 * This was used in the saved state up to the early life of version 14.
 *
 * It indicates that we may have some out-of-sync hidden segment registers.
 * It is only relevant for raw-mode.
 */
#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID    RT_BIT(12)


/*********************************************************************************************************************************
*   Structures and Typedefs                                                                                                      *
*********************************************************************************************************************************/

/**
 * What kind of cpu info dump to perform.
 */
typedef enum CPUMDUMPTYPE
{
    CPUMDUMPTYPE_TERSE,
    CPUMDUMPTYPE_DEFAULT,
    CPUMDUMPTYPE_VERBOSE
} CPUMDUMPTYPE;
/** Pointer to a cpu info dump type. */
typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
static DECLCALLBACK(int)  cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
static DECLCALLBACK(int)  cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
static DECLCALLBACK(int)  cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);

/*********************************************************************************************************************************
*   Global Variables                                                                                                             *
*********************************************************************************************************************************/
/** Saved state field descriptors for CPUMCTX. */
static const SSMFIELD g_aCpumCtxFields[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY( CPUMCTX, rsp),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_TERM()
};
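
/*
 * Illustrative note: descriptor tables like g_aCpumCtxFields are handed to
 * the SSM unit together with a structure instance; SSM then walks the table
 * to serialize or deserialize one member at a time.  A sketch of a typical
 * call (hypothetical invocation; the zero is the fFlags parameter):
 *
 *      int rc = SSMR3PutStructEx(pSSM, &pVCpu->cpum.s.Guest, sizeof(CPUMCTX),
 *                                0, g_aCpumCtxFields, NULL);
 *
 * Each SSMFIELD_ENTRY() records a member's offset, size and name so fields
 * can be re-mapped when the structure layout changes between versions, and
 * SSMFIELD_ENTRY_VER() entries are only loaded from states new enough to
 * contain them.
 */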

/** Saved state field descriptors for SVM nested hardware-virtualization
 *  Host State. */
static const SSMFIELD g_aSvmHwvirtHostState[] =
{
    SSMFIELD_ENTRY( SVMHOSTSTATE, uEferMsr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr0),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr4),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr3),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRip),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRsp),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRax),
    SSMFIELD_ENTRY( SVMHOSTSTATE, rflags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.cbGdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.pGdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.cbIdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.pIdt),
    SSMFIELD_ENTRY_IGNORE(SVMHOSTSTATE, abPadding),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for VMX nested hardware-virtualization
 *  VMCS. */
static const SSMFIELD g_aVmxHwvirtVmcs[] =
{
    SSMFIELD_ENTRY( VMXVVMCS, u32VmcsRevId),
    SSMFIELD_ENTRY( VMXVVMCS, enmVmxAbort),
    SSMFIELD_ENTRY( VMXVVMCS, fVmcsState),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au8Padding0),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved0),

    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, u16Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u32RoVmInstrError),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitReason),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrLen),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrInfo),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32RoReserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestPhysAddr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u64RoExitQual),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRcx),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRsi),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRdi),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRip),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestLinearAddr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved5),

    SSMFIELD_ENTRY( VMXVVMCS, u16Vpid),
    SSMFIELD_ENTRY( VMXVVMCS, u16PostIntNotifyVector),
    SSMFIELD_ENTRY( VMXVVMCS, u16EptpIndex),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u32PinCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMask),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMatch),
    SSMFIELD_ENTRY( VMXVVMCS, u32Cr3TargetCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrStoreCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrLoadCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryMsrLoadCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryIntInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryXcptErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryInstrLen),
    SSMFIELD_ENTRY( VMXVVMCS, u32TprThreshold),
    SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls2),
    SSMFIELD_ENTRY( VMXVVMCS, u32PleGap),
    SSMFIELD_ENTRY( VMXVVMCS, u32PleWindow),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapA),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapB),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrMsrBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrStore),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrLoad),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrEntryMsrLoad),
    SSMFIELD_ENTRY( VMXVVMCS, u64ExecVmcsPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrPml),
    SSMFIELD_ENTRY( VMXVVMCS, u64TscOffset),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVirtApic),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrApicAccess),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrPostedIntDesc),
    SSMFIELD_ENTRY( VMXVVMCS, u64VmFuncCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u64EptPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap0),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap1),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap2),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap3),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrEptpList),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmreadBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmwriteBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrXcptVeInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u64XssExitBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64EnclsExitBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64SppTablePtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64TscMultiplier),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64ProcCtls3, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64EnclvExitBitmap, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u64Cr0Mask),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr4Mask),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr0ReadShadow),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr4ReadShadow),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target0),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target1),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target2),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target3),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved4),

    SSMFIELD_ENTRY( VMXVVMCS, HostEs),
    SSMFIELD_ENTRY( VMXVVMCS, HostCs),
    SSMFIELD_ENTRY( VMXVVMCS, HostSs),
    SSMFIELD_ENTRY( VMXVVMCS, HostDs),
    SSMFIELD_ENTRY( VMXVVMCS, HostFs),
    SSMFIELD_ENTRY( VMXVVMCS, HostGs),
    SSMFIELD_ENTRY( VMXVVMCS, HostTr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u32HostSysenterCs),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved4),

    SSMFIELD_ENTRY( VMXVVMCS, u64HostPatMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostEferMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostPerfGlobalCtlMsr),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostPkrsMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved3),

    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr0),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr3),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr4),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostFsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostGsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostTrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostGdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostIdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEip),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostRsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostRip),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostSCetMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostSsp, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostIntrSspTableAddrMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved7),

    SSMFIELD_ENTRY( VMXVVMCS, GuestEs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestCs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestSs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestDs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestFs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestGs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestLdtr),
    SSMFIELD_ENTRY( VMXVVMCS, GuestTr),
    SSMFIELD_ENTRY( VMXVVMCS, u16GuestIntStatus),
    SSMFIELD_ENTRY( VMXVVMCS, u16PmlIndex),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestIdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestIntrState),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestActivityState),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSmBase),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSysenterCS),
    SSMFIELD_ENTRY( VMXVVMCS, u32PreemptTimer),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved3),

    SSMFIELD_ENTRY( VMXVVMCS, u64VmcsLinkPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDebugCtlMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPatMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestEferMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPerfGlobalCtlMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte0),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte1),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte2),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte3),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestBndcfgsMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRtitCtlMsr),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestPkrsMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr0),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr3),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr4),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestEsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestFsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestGsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestLdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestTrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestGdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestIdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDr7),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRip),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRFlags),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPendingDbgXcpts),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEip),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestSCetMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestSsp, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestIntrSspTableAddrMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved6),

    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86FXSTATE. */
static const SSMFIELD g_aCpumX87Fields[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEHDR. */
static const SSMFIELD g_aCpumXSaveHdrFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEYMMHI. */
static const SSMFIELD g_aCpumYmmHiFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEBNDREGS. */
static const SSMFIELD g_aCpumBndRegsFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEBNDCFG. */
static const SSMFIELD g_aCpumBndCfgFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
    SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
    SSMFIELD_ENTRY_TERM()
};

#if 0 /** @todo */
/** Saved state field descriptors for X86XSAVEOPMASK. */
static const SSMFIELD g_aCpumOpmaskFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
    SSMFIELD_ENTRY_TERM()
};
#endif

/** Saved state field descriptors for X86XSAVEZMMHI256. */
static const SSMFIELD g_aCpumZmmHi256Fields[] =
{
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEZMM16HI. */
static const SSMFIELD g_aCpumZmm16HiFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
    SSMFIELD_ENTRY_TERM()
};



/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
 *  registers changed. */
static const SSMFIELD g_aCpumX87FieldsMem[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
 *  registers changed. */
static const SSMFIELD g_aCpumCtxFieldsMem[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY( CPUMCTX, rsp),
    SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX_VER1_6. */
static const SSMFIELD g_aCpumX87FieldsV16[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX_VER1_6. */
static const SSMFIELD g_aCpumCtxFieldsV16[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
    SSMFIELD_ENTRY_TERM()
};


#if defined(RT_ARCH_X86) || defined(RT_ARCH_AMD64)
/**
 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
 *
 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
 * (last instruction pointer, last data pointer, last opcode) except when the ES
 * bit (Exception Summary) in the x87 FSW (FPU Status Word) is set.  Thus, if we
 * don't clear these registers, there is a potential local FPU state leak from
 * one process using the FPU to another.
 *
 * See the AMD Instruction Reference for FXSAVE, FXRSTOR.
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3CheckLeakyFpu(PVM pVM)
{
    uint32_t u32CpuVersion = ASMCpuId_EAX(1);
    uint32_t const u32Family = u32CpuVersion >> 8;
    if (   u32Family >= 6 /* K7 and higher */
        && (ASMIsAmdCpu() || ASMIsHygonCpu()))
    {
        uint32_t cExt = ASMCpuId_EAX(0x80000000);
        if (RTX86IsValidExtRange(cExt))
        {
            uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
            if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
            {
                for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
                {
                    PVMCPU pVCpu = pVM->apCpusR3[idCpu];
                    pVCpu->cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
                }
                Log(("CPUM: Host CPU has leaky fxsave/fxrstor behaviour\n"));
            }
        }
    }
}
#endif
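
/*
 * Illustrative note: the CPUM_USE_FFXSR_LEAKY flag set above is meant to be
 * checked by the FPU world-switch code so that the stale x87 error pointers
 * can be scrubbed around FXSAVE/FXRSTOR.  A sketch of the consumer side
 * (cpumR0ScrubX87ErrorPointers is a hypothetical helper name; the real work
 * is done in ring-0 / assembly):
 *
 *      if (pVCpu->cpum.s.fUseFlags & CPUM_USE_FFXSR_LEAKY)
 *          cpumR0ScrubX87ErrorPointers(pVCpu);
 *
 * Without such scrubbing, the FPUIP/FPUDP/FOP values of one context could be
 * observed by the next one on the affected AMD CPUs.
 */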


/**
 * Initialize SVM hardware virtualization state (used to allocate it).
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3InitSvmHwVirtState(PVM pVM)
{
    Assert(pVM->cpum.s.GuestFeatures.fSvm);

    LogRel(("CPUM: AMD-V nested-guest init\n"));
    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU pVCpu = pVM->apCpusR3[i];
        pVCpu->cpum.s.Guest.hwvirt.enmHwvirt = CPUMHWVIRT_SVM;

        AssertCompile(SVM_VMCB_PAGES  * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.Vmcb));
        AssertCompile(SVM_MSRPM_PAGES * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.abMsrBitmap));
        AssertCompile(SVM_IOPM_PAGES  * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.abIoBitmap));
    }
}


/**
 * Resets per-VCPU SVM hardware virtualization state.
 *
 * @param   pVCpu   The cross context virtual CPU structure.
 */
DECLINLINE(void) cpumR3ResetSvmHwVirtState(PVMCPU pVCpu)
{
    PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
    Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_SVM);

    RT_ZERO(pCtx->hwvirt.svm.Vmcb);
    pCtx->hwvirt.svm.uMsrHSavePa    = 0;
    pCtx->hwvirt.svm.uPrevPauseTick = 0;
}


/**
 * Allocates memory for the VMX hardware virtualization state.
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3InitVmxHwVirtState(PVM pVM)
{
    LogRel(("CPUM: VT-x nested-guest init\n"));
    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU   pVCpu = pVM->apCpusR3[i];
        PCPUMCTX pCtx  = &pVCpu->cpum.s.Guest;

        pCtx->hwvirt.enmHwvirt = CPUMHWVIRT_VMX;

        AssertCompile(sizeof(pCtx->hwvirt.vmx.Vmcs) == VMX_V_VMCS_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.Vmcs) == VMX_V_VMCS_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.ShadowVmcs) == VMX_V_SHADOW_VMCS_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.ShadowVmcs) == VMX_V_SHADOW_VMCS_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmreadBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmreadBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmwriteBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmwriteBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aEntryMsrLoadArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aEntryMsrLoadArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrStoreArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrStoreArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrLoadArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrLoadArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abMsrBitmap) == VMX_V_MSR_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abMsrBitmap) == VMX_V_MSR_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abIoBitmap) == (VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES) * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abIoBitmap) == VMX_V_IO_BITMAP_A_SIZE + VMX_V_IO_BITMAP_B_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVirtApicPage) == VMX_V_VIRT_APIC_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVirtApicPage) == VMX_V_VIRT_APIC_SIZE);

        /*
         * Zero out all allocated pages (should compress well for saved-state).
         */
        /** @todo r=bird: this is and always was unnecessary - they are already zeroed. */
        RT_ZERO(pCtx->hwvirt.vmx.Vmcs);
        RT_ZERO(pCtx->hwvirt.vmx.ShadowVmcs);
        RT_ZERO(pCtx->hwvirt.vmx.abVmreadBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abVmwriteBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.aEntryMsrLoadArea);
        RT_ZERO(pCtx->hwvirt.vmx.aExitMsrStoreArea);
        RT_ZERO(pCtx->hwvirt.vmx.aExitMsrLoadArea);
        RT_ZERO(pCtx->hwvirt.vmx.abMsrBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abIoBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abVirtApicPage);
    }
}

/**
 * Resets per-VCPU VMX hardware virtualization state.
 *
 * @param   pVCpu   The cross context virtual CPU structure.
 */
DECLINLINE(void) cpumR3ResetVmxHwVirtState(PVMCPU pVCpu)
{
    PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
    Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_VMX);

    RT_ZERO(pCtx->hwvirt.vmx.Vmcs);
    RT_ZERO(pCtx->hwvirt.vmx.ShadowVmcs);
    pCtx->hwvirt.vmx.GCPhysVmxon       = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.GCPhysShadowVmcs  = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.GCPhysVmcs        = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.fInVmxRootMode    = false;
    pCtx->hwvirt.vmx.fInVmxNonRootMode = false;
    /* Don't reset diagnostics here. */

    /* Stop any VMX-preemption timer. */
    CPUMStopGuestVmxPremptTimer(pVCpu);

    /* Clear all nested-guest FFs. */
    VMCPU_FF_CLEAR_MASK(pVCpu, VMCPU_FF_VMX_ALL_MASK);
}


/**
 * Displays the host and guest VMX features.
 *
 * @param   pVM     The cross context VM structure.
 * @param   pHlp    The info helper functions.
 * @param   pszArgs "terse", "default" or "verbose".
 */
DECLCALLBACK(void) cpumR3InfoVmxFeatures(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    RT_NOREF(pszArgs);
    PCCPUMFEATURES pHostFeatures  = &pVM->cpum.s.HostFeatures;
    PCCPUMFEATURES pGuestFeatures = &pVM->cpum.s.GuestFeatures;
    if (   pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_INTEL
        || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_VIA
        || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_SHANGHAI)
    {
#define VMXFEATDUMP(a_szDesc, a_Var) \
        pHlp->pfnPrintf(pHlp, "  %s = %u (%u)\n", a_szDesc, pGuestFeatures->a_Var, pHostFeatures->a_Var)

        pHlp->pfnPrintf(pHlp, "Nested hardware virtualization - VMX features\n");
        pHlp->pfnPrintf(pHlp, "  Mnemonic - Description = guest (host)\n");
        VMXFEATDUMP("VMX - Virtual-Machine Extensions ", fVmx);
        /* Basic. */
        VMXFEATDUMP("InsOutInfo - INS/OUTS instruction info. ", fVmxInsOutInfo);

        /* Pin-based controls. */
        VMXFEATDUMP("ExtIntExit - External interrupt exiting ", fVmxExtIntExit);
        VMXFEATDUMP("NmiExit - NMI exiting ", fVmxNmiExit);
        VMXFEATDUMP("VirtNmi - Virtual NMIs ", fVmxVirtNmi);
        VMXFEATDUMP("PreemptTimer - VMX preemption timer ", fVmxPreemptTimer);
        VMXFEATDUMP("PostedInt - Posted interrupts ", fVmxPostedInt);

        /* Processor-based controls. */
        VMXFEATDUMP("IntWindowExit - Interrupt-window exiting ", fVmxIntWindowExit);
        VMXFEATDUMP("TscOffsetting - TSC offsetting ", fVmxTscOffsetting);
        VMXFEATDUMP("HltExit - HLT exiting ", fVmxHltExit);
        VMXFEATDUMP("InvlpgExit - INVLPG exiting ", fVmxInvlpgExit);
        VMXFEATDUMP("MwaitExit - MWAIT exiting ", fVmxMwaitExit);
        VMXFEATDUMP("RdpmcExit - RDPMC exiting ", fVmxRdpmcExit);
        VMXFEATDUMP("RdtscExit - RDTSC exiting ", fVmxRdtscExit);
        VMXFEATDUMP("Cr3LoadExit - CR3-load exiting ", fVmxCr3LoadExit);
        VMXFEATDUMP("Cr3StoreExit - CR3-store exiting ", fVmxCr3StoreExit);
        VMXFEATDUMP("TertiaryExecCtls - Activate tertiary controls ", fVmxTertiaryExecCtls);
        VMXFEATDUMP("Cr8LoadExit - CR8-load exiting ", fVmxCr8LoadExit);
        VMXFEATDUMP("Cr8StoreExit - CR8-store exiting ", fVmxCr8StoreExit);
        VMXFEATDUMP("UseTprShadow - Use TPR shadow ", fVmxUseTprShadow);
        VMXFEATDUMP("NmiWindowExit - NMI-window exiting ", fVmxNmiWindowExit);
        VMXFEATDUMP("MovDRxExit - Mov-DR exiting ", fVmxMovDRxExit);
        VMXFEATDUMP("UncondIoExit - Unconditional I/O exiting ", fVmxUncondIoExit);
        VMXFEATDUMP("UseIoBitmaps - Use I/O bitmaps ", fVmxUseIoBitmaps);
        VMXFEATDUMP("MonitorTrapFlag - Monitor Trap Flag ", fVmxMonitorTrapFlag);
        VMXFEATDUMP("UseMsrBitmaps - MSR bitmaps ", fVmxUseMsrBitmaps);
        VMXFEATDUMP("MonitorExit - MONITOR exiting ", fVmxMonitorExit);
        VMXFEATDUMP("PauseExit - PAUSE exiting ", fVmxPauseExit);
        VMXFEATDUMP("SecondaryExecCtl - Activate secondary controls ", fVmxSecondaryExecCtls);

        /* Secondary processor-based controls. */
        VMXFEATDUMP("VirtApic - Virtualize-APIC accesses ", fVmxVirtApicAccess);
        VMXFEATDUMP("Ept - Extended Page Tables ", fVmxEpt);
        VMXFEATDUMP("DescTableExit - Descriptor-table exiting ", fVmxDescTableExit);
        VMXFEATDUMP("Rdtscp - Enable RDTSCP ", fVmxRdtscp);
        VMXFEATDUMP("VirtX2ApicMode - Virtualize-x2APIC mode ", fVmxVirtX2ApicMode);
        VMXFEATDUMP("Vpid - Enable VPID ", fVmxVpid);
        VMXFEATDUMP("WbinvdExit - WBINVD exiting ", fVmxWbinvdExit);
        VMXFEATDUMP("UnrestrictedGuest - Unrestricted guest ", fVmxUnrestrictedGuest);
        VMXFEATDUMP("ApicRegVirt - APIC-register virtualization ", fVmxApicRegVirt);
        VMXFEATDUMP("VirtIntDelivery - Virtual-interrupt delivery ", fVmxVirtIntDelivery);
        VMXFEATDUMP("PauseLoopExit - PAUSE-loop exiting ", fVmxPauseLoopExit);
        VMXFEATDUMP("RdrandExit - RDRAND exiting ", fVmxRdrandExit);
        VMXFEATDUMP("Invpcid - Enable INVPCID ", fVmxInvpcid);
        VMXFEATDUMP("VmFuncs - Enable VM Functions ", fVmxVmFunc);
        VMXFEATDUMP("VmcsShadowing - VMCS shadowing ", fVmxVmcsShadowing);
        VMXFEATDUMP("RdseedExiting - RDSEED exiting ", fVmxRdseedExit);
        VMXFEATDUMP("PML - Page-Modification Log (PML) ", fVmxPml);
        VMXFEATDUMP("EptVe - EPT violations can cause #VE ", fVmxEptXcptVe);
        VMXFEATDUMP("ConcealVmxFromPt - Conceal VMX from Processor Trace ", fVmxConcealVmxFromPt);
        VMXFEATDUMP("XsavesXRstors - Enable XSAVES/XRSTORS ", fVmxXsavesXrstors);
        VMXFEATDUMP("ModeBasedExecuteEpt - Mode-based execute permissions ", fVmxModeBasedExecuteEpt);
        VMXFEATDUMP("SppEpt - Sub-page page write permissions for EPT ", fVmxSppEpt);
        VMXFEATDUMP("PtEpt - Processor Trace addresses translatable by EPT ", fVmxPtEpt);
        VMXFEATDUMP("UseTscScaling - Use TSC scaling ", fVmxUseTscScaling);
        VMXFEATDUMP("UserWaitPause - Enable TPAUSE, UMONITOR and UMWAIT ", fVmxUserWaitPause);
        VMXFEATDUMP("EnclvExit - ENCLV exiting ", fVmxEnclvExit);

        /* Tertiary processor-based controls. */
        VMXFEATDUMP("LoadIwKeyExit - LOADIWKEY exiting ", fVmxLoadIwKeyExit);

        /* VM-entry controls. */
        VMXFEATDUMP("EntryLoadDebugCtls - Load debug controls on VM-entry ", fVmxEntryLoadDebugCtls);
        VMXFEATDUMP("Ia32eModeGuest - IA-32e mode guest ", fVmxIa32eModeGuest);
        VMXFEATDUMP("EntryLoadEferMsr - Load IA32_EFER MSR on VM-entry ", fVmxEntryLoadEferMsr);
        VMXFEATDUMP("EntryLoadPatMsr - Load IA32_PAT MSR on VM-entry ", fVmxEntryLoadPatMsr);

        /* VM-exit controls. */
        VMXFEATDUMP("ExitSaveDebugCtls - Save debug controls on VM-exit ", fVmxExitSaveDebugCtls);
        VMXFEATDUMP("HostAddrSpaceSize - Host address-space size ", fVmxHostAddrSpaceSize);
        VMXFEATDUMP("ExitAckExtInt - Acknowledge interrupt on VM-exit ", fVmxExitAckExtInt);
        VMXFEATDUMP("ExitSavePatMsr - Save IA32_PAT MSR on VM-exit ", fVmxExitSavePatMsr);
        VMXFEATDUMP("ExitLoadPatMsr - Load IA32_PAT MSR on VM-exit ", fVmxExitLoadPatMsr);
        VMXFEATDUMP("ExitSaveEferMsr - Save IA32_EFER MSR on VM-exit ", fVmxExitSaveEferMsr);
        VMXFEATDUMP("ExitLoadEferMsr - Load IA32_EFER MSR on VM-exit ", fVmxExitLoadEferMsr);
        VMXFEATDUMP("SavePreemptTimer - Save VMX-preemption timer ", fVmxSavePreemptTimer);

        /* Miscellaneous data. */
        VMXFEATDUMP("ExitSaveEferLma - Save IA32_EFER.LMA on VM-exit ", fVmxExitSaveEferLma);
        VMXFEATDUMP("IntelPt - Intel PT (Processor Trace) in VMX operation ", fVmxPt);
        VMXFEATDUMP("VmwriteAll - VMWRITE to any supported VMCS field ", fVmxVmwriteAll);
        VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
#undef VMXFEATDUMP
    }
    else
        pHlp->pfnPrintf(pHlp, "No VMX features present - requires an Intel or compatible CPU.\n");
}


/**
 * Checks whether nested-guest execution using hardware-assisted VMX (e.g.,
 * using HM or NEM) is allowed.
 *
 * @returns @c true if hardware-assisted nested-guest execution is allowed,
 *          @c false otherwise.
 * @param   pVM     The cross context VM structure.
 */
static bool cpumR3IsHwAssistNstGstExecAllowed(PVM pVM)
{
    AssertMsg(pVM->bMainExecutionEngine != VM_EXEC_ENGINE_NOT_SET, ("Calling this function too early!\n"));
#ifndef VBOX_WITH_NESTED_HWVIRT_ONLY_IN_IEM
    if (   pVM->bMainExecutionEngine == VM_EXEC_ENGINE_HW_VIRT
        || pVM->bMainExecutionEngine == VM_EXEC_ENGINE_NATIVE_API)
        return true;
#else
    NOREF(pVM);
#endif
    return false;
}
1270
1271
1272/**
1273 * Initializes the VMX guest MSRs from guest CPU features based on the host MSRs.
1274 *
1275 * @param pVM The cross context VM structure.
1276 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1277 * and no hardware-assisted nested-guest execution is
1278 * possible for this VM.
1279 * @param pGuestFeatures The guest features to use (only VMX features are
1280 * accessed).
1281 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1282 *
1283 * @remarks This function ASSUMES the VMX guest-features are already exploded!
1284 */
1285static void cpumR3InitVmxGuestMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PCCPUMFEATURES pGuestFeatures, PVMXMSRS pGuestVmxMsrs)
1286{
1287 bool const fIsNstGstHwExecAllowed = cpumR3IsHwAssistNstGstExecAllowed(pVM);
1288
1289 Assert(!fIsNstGstHwExecAllowed || pHostVmxMsrs);
1290 Assert(pGuestFeatures->fVmx);
1291
1292 /*
1293     * Note: The "true" capability MSRs are derived from their regular
1294     * counterparts below when fTrueVmxMsrs is set:
1295     *  - True Pin-based VM-execution controls.
1296     *  - True Processor-based VM-execution controls.
1297     *  - True VM-entry controls and true VM-exit controls.
1298 */
1299
1300 /* Basic information. */
1301 uint8_t const fTrueVmxMsrs = 1;
1302 {
1303 uint64_t const u64Basic = RT_BF_MAKE(VMX_BF_BASIC_VMCS_ID, VMX_V_VMCS_REVISION_ID )
1304 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_SIZE, VMX_V_VMCS_SIZE )
1305 | RT_BF_MAKE(VMX_BF_BASIC_PHYSADDR_WIDTH, !pGuestFeatures->fLongMode )
1306 | RT_BF_MAKE(VMX_BF_BASIC_DUAL_MON, 0 )
1307 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_MEM_TYPE, VMX_BASIC_MEM_TYPE_WB )
1308 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_INS_OUTS, pGuestFeatures->fVmxInsOutInfo)
1309 | RT_BF_MAKE(VMX_BF_BASIC_TRUE_CTLS, fTrueVmxMsrs );
1310 pGuestVmxMsrs->u64Basic = u64Basic;
1311 }
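    /* Note: the RT_BF_MAKE/RT_BF_GET helpers used throughout pair the named
       VMX_BF_xxx_SHIFT/_MASK definitions to place a value into (or extract it
       from) the corresponding MSR bit-field, roughly
       ((uint64_t)uValue << VMX_BF_xxx_SHIFT) & VMX_BF_xxx_MASK, and the inverse
       for RT_BF_GET. */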
1312
1313 /* Pin-based VM-execution controls. */
1314 {
1315 uint32_t const fFeatures = (pGuestFeatures->fVmxExtIntExit << VMX_BF_PIN_CTLS_EXT_INT_EXIT_SHIFT )
1316 | (pGuestFeatures->fVmxNmiExit << VMX_BF_PIN_CTLS_NMI_EXIT_SHIFT )
1317 | (pGuestFeatures->fVmxVirtNmi << VMX_BF_PIN_CTLS_VIRT_NMI_SHIFT )
1318 | (pGuestFeatures->fVmxPreemptTimer << VMX_BF_PIN_CTLS_PREEMPT_TIMER_SHIFT)
1319 | (pGuestFeatures->fVmxPostedInt << VMX_BF_PIN_CTLS_POSTED_INT_SHIFT );
1320 uint32_t const fAllowed0 = VMX_PIN_CTLS_DEFAULT1;
1321 uint32_t const fAllowed1 = fFeatures | VMX_PIN_CTLS_DEFAULT1;
1322 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n",
1323 fAllowed0, fAllowed1, fFeatures));
1324 pGuestVmxMsrs->PinCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1325
1326 /* True pin-based VM-execution controls. */
1327 if (fTrueVmxMsrs)
1328 {
1329            /* VMX_PIN_CTLS_DEFAULT1 contains MB1 reserved bits; these remain MB1 in the true pin-based controls as well. */
1330 pGuestVmxMsrs->TruePinCtls.u = pGuestVmxMsrs->PinCtls.u;
1331 }
1332 }
1333
1334 /* Processor-based VM-execution controls. */
1335 {
1336 uint32_t const fFeatures = (pGuestFeatures->fVmxIntWindowExit << VMX_BF_PROC_CTLS_INT_WINDOW_EXIT_SHIFT )
1337 | (pGuestFeatures->fVmxTscOffsetting << VMX_BF_PROC_CTLS_USE_TSC_OFFSETTING_SHIFT)
1338 | (pGuestFeatures->fVmxHltExit << VMX_BF_PROC_CTLS_HLT_EXIT_SHIFT )
1339 | (pGuestFeatures->fVmxInvlpgExit << VMX_BF_PROC_CTLS_INVLPG_EXIT_SHIFT )
1340 | (pGuestFeatures->fVmxMwaitExit << VMX_BF_PROC_CTLS_MWAIT_EXIT_SHIFT )
1341 | (pGuestFeatures->fVmxRdpmcExit << VMX_BF_PROC_CTLS_RDPMC_EXIT_SHIFT )
1342 | (pGuestFeatures->fVmxRdtscExit << VMX_BF_PROC_CTLS_RDTSC_EXIT_SHIFT )
1343 | (pGuestFeatures->fVmxCr3LoadExit << VMX_BF_PROC_CTLS_CR3_LOAD_EXIT_SHIFT )
1344 | (pGuestFeatures->fVmxCr3StoreExit << VMX_BF_PROC_CTLS_CR3_STORE_EXIT_SHIFT )
1345 | (pGuestFeatures->fVmxTertiaryExecCtls << VMX_BF_PROC_CTLS_USE_TERTIARY_CTLS_SHIFT )
1346 | (pGuestFeatures->fVmxCr8LoadExit << VMX_BF_PROC_CTLS_CR8_LOAD_EXIT_SHIFT )
1347 | (pGuestFeatures->fVmxCr8StoreExit << VMX_BF_PROC_CTLS_CR8_STORE_EXIT_SHIFT )
1348 | (pGuestFeatures->fVmxUseTprShadow << VMX_BF_PROC_CTLS_USE_TPR_SHADOW_SHIFT )
1349 | (pGuestFeatures->fVmxNmiWindowExit << VMX_BF_PROC_CTLS_NMI_WINDOW_EXIT_SHIFT )
1350 | (pGuestFeatures->fVmxMovDRxExit << VMX_BF_PROC_CTLS_MOV_DR_EXIT_SHIFT )
1351 | (pGuestFeatures->fVmxUncondIoExit << VMX_BF_PROC_CTLS_UNCOND_IO_EXIT_SHIFT )
1352 | (pGuestFeatures->fVmxUseIoBitmaps << VMX_BF_PROC_CTLS_USE_IO_BITMAPS_SHIFT )
1353 | (pGuestFeatures->fVmxMonitorTrapFlag << VMX_BF_PROC_CTLS_MONITOR_TRAP_FLAG_SHIFT )
1354 | (pGuestFeatures->fVmxUseMsrBitmaps << VMX_BF_PROC_CTLS_USE_MSR_BITMAPS_SHIFT )
1355 | (pGuestFeatures->fVmxMonitorExit << VMX_BF_PROC_CTLS_MONITOR_EXIT_SHIFT )
1356 | (pGuestFeatures->fVmxPauseExit << VMX_BF_PROC_CTLS_PAUSE_EXIT_SHIFT )
1357 | (pGuestFeatures->fVmxSecondaryExecCtls << VMX_BF_PROC_CTLS_USE_SECONDARY_CTLS_SHIFT);
1358 uint32_t const fAllowed0 = VMX_PROC_CTLS_DEFAULT1;
1359 uint32_t const fAllowed1 = fFeatures | VMX_PROC_CTLS_DEFAULT1;
1360 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1361 fAllowed1, fFeatures));
1362 pGuestVmxMsrs->ProcCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1363
1364 /* True processor-based VM-execution controls. */
1365 if (fTrueVmxMsrs)
1366 {
1367 /* VMX_PROC_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved. */
1368 uint32_t const fTrueAllowed0 = VMX_PROC_CTLS_DEFAULT1 & ~( VMX_BF_PROC_CTLS_CR3_LOAD_EXIT_MASK
1369 | VMX_BF_PROC_CTLS_CR3_STORE_EXIT_MASK);
1370 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1371 pGuestVmxMsrs->TrueProcCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1372 }
1373 }
1374
1375 /* Secondary processor-based VM-execution controls. */
1376 if (pGuestFeatures->fVmxSecondaryExecCtls)
1377 {
1378 uint32_t const fFeatures = (pGuestFeatures->fVmxVirtApicAccess << VMX_BF_PROC_CTLS2_VIRT_APIC_ACCESS_SHIFT )
1379 | (pGuestFeatures->fVmxEpt << VMX_BF_PROC_CTLS2_EPT_SHIFT )
1380 | (pGuestFeatures->fVmxDescTableExit << VMX_BF_PROC_CTLS2_DESC_TABLE_EXIT_SHIFT )
1381 | (pGuestFeatures->fVmxRdtscp << VMX_BF_PROC_CTLS2_RDTSCP_SHIFT )
1382 | (pGuestFeatures->fVmxVirtX2ApicMode << VMX_BF_PROC_CTLS2_VIRT_X2APIC_MODE_SHIFT )
1383 | (pGuestFeatures->fVmxVpid << VMX_BF_PROC_CTLS2_VPID_SHIFT )
1384 | (pGuestFeatures->fVmxWbinvdExit << VMX_BF_PROC_CTLS2_WBINVD_EXIT_SHIFT )
1385 | (pGuestFeatures->fVmxUnrestrictedGuest << VMX_BF_PROC_CTLS2_UNRESTRICTED_GUEST_SHIFT )
1386 | (pGuestFeatures->fVmxApicRegVirt << VMX_BF_PROC_CTLS2_APIC_REG_VIRT_SHIFT )
1387 | (pGuestFeatures->fVmxVirtIntDelivery << VMX_BF_PROC_CTLS2_VIRT_INT_DELIVERY_SHIFT )
1388 | (pGuestFeatures->fVmxPauseLoopExit << VMX_BF_PROC_CTLS2_PAUSE_LOOP_EXIT_SHIFT )
1389 | (pGuestFeatures->fVmxRdrandExit << VMX_BF_PROC_CTLS2_RDRAND_EXIT_SHIFT )
1390 | (pGuestFeatures->fVmxInvpcid << VMX_BF_PROC_CTLS2_INVPCID_SHIFT )
1391 | (pGuestFeatures->fVmxVmFunc << VMX_BF_PROC_CTLS2_VMFUNC_SHIFT )
1392 | (pGuestFeatures->fVmxVmcsShadowing << VMX_BF_PROC_CTLS2_VMCS_SHADOWING_SHIFT )
1393 | (pGuestFeatures->fVmxRdseedExit << VMX_BF_PROC_CTLS2_RDSEED_EXIT_SHIFT )
1394 | (pGuestFeatures->fVmxPml << VMX_BF_PROC_CTLS2_PML_SHIFT )
1395 | (pGuestFeatures->fVmxEptXcptVe << VMX_BF_PROC_CTLS2_EPT_VE_SHIFT )
1396 | (pGuestFeatures->fVmxConcealVmxFromPt << VMX_BF_PROC_CTLS2_CONCEAL_VMX_FROM_PT_SHIFT)
1397 | (pGuestFeatures->fVmxXsavesXrstors << VMX_BF_PROC_CTLS2_XSAVES_XRSTORS_SHIFT )
1398 | (pGuestFeatures->fVmxModeBasedExecuteEpt << VMX_BF_PROC_CTLS2_MODE_BASED_EPT_PERM_SHIFT)
1399 | (pGuestFeatures->fVmxSppEpt << VMX_BF_PROC_CTLS2_SPP_EPT_SHIFT )
1400 | (pGuestFeatures->fVmxPtEpt << VMX_BF_PROC_CTLS2_PT_EPT_SHIFT )
1401 | (pGuestFeatures->fVmxUseTscScaling << VMX_BF_PROC_CTLS2_TSC_SCALING_SHIFT )
1402 | (pGuestFeatures->fVmxUserWaitPause << VMX_BF_PROC_CTLS2_USER_WAIT_PAUSE_SHIFT )
1403 | (pGuestFeatures->fVmxEnclvExit << VMX_BF_PROC_CTLS2_ENCLV_EXIT_SHIFT );
1404 uint32_t const fAllowed0 = 0;
1405 uint32_t const fAllowed1 = fFeatures;
1406 pGuestVmxMsrs->ProcCtls2.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1407 }
1408
1409 /* Tertiary processor-based VM-execution controls. */
1410 if (pGuestFeatures->fVmxTertiaryExecCtls)
1411 {
1412 pGuestVmxMsrs->u64ProcCtls3 = (pGuestFeatures->fVmxLoadIwKeyExit << VMX_BF_PROC_CTLS3_LOADIWKEY_EXIT_SHIFT);
1413 }
1414
1415 /* VM-exit controls. */
1416 {
1417 uint32_t const fFeatures = (pGuestFeatures->fVmxExitSaveDebugCtls << VMX_BF_EXIT_CTLS_SAVE_DEBUG_SHIFT )
1418 | (pGuestFeatures->fVmxHostAddrSpaceSize << VMX_BF_EXIT_CTLS_HOST_ADDR_SPACE_SIZE_SHIFT)
1419 | (pGuestFeatures->fVmxExitAckExtInt << VMX_BF_EXIT_CTLS_ACK_EXT_INT_SHIFT )
1420 | (pGuestFeatures->fVmxExitSavePatMsr << VMX_BF_EXIT_CTLS_SAVE_PAT_MSR_SHIFT )
1421 | (pGuestFeatures->fVmxExitLoadPatMsr << VMX_BF_EXIT_CTLS_LOAD_PAT_MSR_SHIFT )
1422 | (pGuestFeatures->fVmxExitSaveEferMsr << VMX_BF_EXIT_CTLS_SAVE_EFER_MSR_SHIFT )
1423 | (pGuestFeatures->fVmxExitLoadEferMsr << VMX_BF_EXIT_CTLS_LOAD_EFER_MSR_SHIFT )
1424 | (pGuestFeatures->fVmxSavePreemptTimer << VMX_BF_EXIT_CTLS_SAVE_PREEMPT_TIMER_SHIFT );
1425 /* Set the default1 class bits. See Intel spec. A.4 "VM-exit Controls". */
1426 uint32_t const fAllowed0 = VMX_EXIT_CTLS_DEFAULT1;
1427 uint32_t const fAllowed1 = fFeatures | VMX_EXIT_CTLS_DEFAULT1;
1428 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1429 fAllowed1, fFeatures));
1430 pGuestVmxMsrs->ExitCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1431
1432 /* True VM-exit controls. */
1433 if (fTrueVmxMsrs)
1434 {
1435            /* VMX_EXIT_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved. */
1436 uint32_t const fTrueAllowed0 = VMX_EXIT_CTLS_DEFAULT1 & ~VMX_BF_EXIT_CTLS_SAVE_DEBUG_MASK;
1437 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1438 pGuestVmxMsrs->TrueExitCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1439 }
1440 }
1441
1442 /* VM-entry controls. */
1443 {
1444 uint32_t const fFeatures = (pGuestFeatures->fVmxEntryLoadDebugCtls << VMX_BF_ENTRY_CTLS_LOAD_DEBUG_SHIFT )
1445 | (pGuestFeatures->fVmxIa32eModeGuest << VMX_BF_ENTRY_CTLS_IA32E_MODE_GUEST_SHIFT)
1446 | (pGuestFeatures->fVmxEntryLoadEferMsr << VMX_BF_ENTRY_CTLS_LOAD_EFER_MSR_SHIFT )
1447 | (pGuestFeatures->fVmxEntryLoadPatMsr << VMX_BF_ENTRY_CTLS_LOAD_PAT_MSR_SHIFT );
1448 uint32_t const fAllowed0 = VMX_ENTRY_CTLS_DEFAULT1;
1449 uint32_t const fAllowed1 = fFeatures | VMX_ENTRY_CTLS_DEFAULT1;
1450        AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1451 fAllowed1, fFeatures));
1452 pGuestVmxMsrs->EntryCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1453
1454 /* True VM-entry controls. */
1455 if (fTrueVmxMsrs)
1456 {
1457            /* VMX_ENTRY_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved. */
1458 uint32_t const fTrueAllowed0 = VMX_ENTRY_CTLS_DEFAULT1 & ~( VMX_BF_ENTRY_CTLS_LOAD_DEBUG_MASK
1459 | VMX_BF_ENTRY_CTLS_IA32E_MODE_GUEST_MASK
1460 | VMX_BF_ENTRY_CTLS_ENTRY_SMM_MASK
1461 | VMX_BF_ENTRY_CTLS_DEACTIVATE_DUAL_MON_MASK);
1462 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1463 pGuestVmxMsrs->TrueEntryCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1464 }
1465 }
1466
1467 /* Miscellaneous data. */
1468 {
1469 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Misc : 0;
1470
1471 uint8_t const cMaxMsrs = RT_MIN(RT_BF_GET(uHostMsr, VMX_BF_MISC_MAX_MSRS), VMX_V_AUTOMSR_COUNT_MAX);
1472 uint8_t const fActivityState = RT_BF_GET(uHostMsr, VMX_BF_MISC_ACTIVITY_STATES) & VMX_V_GUEST_ACTIVITY_STATE_MASK;
1473 pGuestVmxMsrs->u64Misc = RT_BF_MAKE(VMX_BF_MISC_PREEMPT_TIMER_TSC, VMX_V_PREEMPT_TIMER_SHIFT )
1474 | RT_BF_MAKE(VMX_BF_MISC_EXIT_SAVE_EFER_LMA, pGuestFeatures->fVmxExitSaveEferLma )
1475 | RT_BF_MAKE(VMX_BF_MISC_ACTIVITY_STATES, fActivityState )
1476 | RT_BF_MAKE(VMX_BF_MISC_INTEL_PT, pGuestFeatures->fVmxPt )
1477 | RT_BF_MAKE(VMX_BF_MISC_SMM_READ_SMBASE_MSR, 0 )
1478 | RT_BF_MAKE(VMX_BF_MISC_CR3_TARGET, VMX_V_CR3_TARGET_COUNT )
1479 | RT_BF_MAKE(VMX_BF_MISC_MAX_MSRS, cMaxMsrs )
1480 | RT_BF_MAKE(VMX_BF_MISC_VMXOFF_BLOCK_SMI, 0 )
1481 | RT_BF_MAKE(VMX_BF_MISC_VMWRITE_ALL, pGuestFeatures->fVmxVmwriteAll )
1482 | RT_BF_MAKE(VMX_BF_MISC_ENTRY_INJECT_SOFT_INT, pGuestFeatures->fVmxEntryInjectSoftInt)
1483 | RT_BF_MAKE(VMX_BF_MISC_MSEG_ID, VMX_V_MSEG_REV_ID );
1484 }
1485
1486    /* CR0 Fixed-0 (we report this fixed value regardless of whether unrestricted guest execution is supported, as real hardware does). */
1487 pGuestVmxMsrs->u64Cr0Fixed0 = VMX_V_CR0_FIXED0;
1488
1489 /* CR0 Fixed-1. */
1490 {
1491 /*
1492 * All CPUs I've looked at so far report CR0 fixed-1 bits as 0xffffffff.
1493 * This is different from CR4 fixed-1 bits which are reported as per the
1494 * CPU features and/or micro-architecture/generation. Why? Ask Intel.
1495 */
1496 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr0Fixed1 : VMX_V_CR0_FIXED1;
1497 pGuestVmxMsrs->u64Cr0Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr0Fixed0; /* Make sure the CR0 MB1 bits are not clear. */
1498 }
1499
1500 /* CR4 Fixed-0. */
1501 pGuestVmxMsrs->u64Cr4Fixed0 = VMX_V_CR4_FIXED0;
1502
1503 /* CR4 Fixed-1. */
1504 {
1505 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr4Fixed1 : CPUMGetGuestCR4ValidMask(pVM);
1506 pGuestVmxMsrs->u64Cr4Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr4Fixed0; /* Make sure the CR4 MB1 bits are not clear. */
1507 }
1508
1509 /* VMCS Enumeration. */
1510 pGuestVmxMsrs->u64VmcsEnum = VMX_V_VMCS_MAX_INDEX << VMX_BF_VMCS_ENUM_HIGHEST_IDX_SHIFT;
1511
1512 /* VPID and EPT Capabilities. */
1513 if (pGuestFeatures->fVmxEpt)
1514 {
1515 /*
1516         * The INVVPID instruction always causes a VM-exit, so we are free to fake and
1517         * emulate any INVVPID flush type. However, it only makes sense to expose the flush
1518         * types when the INVVPID instruction itself is supported, to be more compatible with
1519         * guest hypervisors that may make assumptions by only looking at this MSR even though
1520         * they are technically supposed to check VMX_PROC_CTLS2_VPID first.
1521 *
1522 * See Intel spec. 25.1.2 "Instructions That Cause VM Exits Unconditionally".
1523 * See Intel spec. 30.3 "VMX Instructions".
1524 */
1525 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64EptVpidCaps : UINT64_MAX;
1526 uint8_t const fVpid = pGuestFeatures->fVmxVpid;
1527
1528 uint8_t const fExecOnly = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_EXEC_ONLY);
1529 uint8_t const fPml4 = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PAGE_WALK_LENGTH_4);
1530 uint8_t const fMemTypeUc = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_MEMTYPE_UC);
1531 uint8_t const fMemTypeWb = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_MEMTYPE_WB);
1532 uint8_t const f2MPage = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PDE_2M);
1533 uint8_t const f1GPage = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PDPTE_1G);
1534 uint8_t const fInvept = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT);
1535 /** @todo Nested VMX: Support accessed/dirty bits, see @bugref{10092#c25}. */
1536 /* uint8_t const fAccessDirty = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_ACCESS_DIRTY); */
1537 uint8_t const fEptSingle = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT_SINGLE_CTX);
1538 uint8_t const fEptAll = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT_ALL_CTX);
1539 uint8_t const fVpidIndiv = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_INDIV_ADDR);
1540 uint8_t const fVpidSingle = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX);
1541 uint8_t const fVpidAll = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_ALL_CTX);
1542 uint8_t const fVpidSingleGlobal = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX_RETAIN_GLOBALS);
1543 pGuestVmxMsrs->u64EptVpidCaps = RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_EXEC_ONLY, fExecOnly)
1544 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PAGE_WALK_LENGTH_4, fPml4)
1545 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_MEMTYPE_UC, fMemTypeUc)
1546 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_MEMTYPE_WB, fMemTypeWb)
1547 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PDE_2M, f2MPage)
1548 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PDPTE_1G, f1GPage)
1549 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT, fInvept)
1550 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_ACCESS_DIRTY, 0)
1551 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_ADVEXITINFO_EPT_VIOLATION, 0)
1552 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_SUPER_SHW_STACK, 0)
1553 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT_SINGLE_CTX, fEptSingle)
1554 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT_ALL_CTX, fEptAll)
1555 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID, fVpid)
1556 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_INDIV_ADDR, fVpid & fVpidIndiv)
1557 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX, fVpid & fVpidSingle)
1558 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_ALL_CTX, fVpid & fVpidAll)
1559 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX_RETAIN_GLOBALS, fVpid & fVpidSingleGlobal);
1560 }
1561
1562 /* VM Functions. */
1563 if (pGuestFeatures->fVmxVmFunc)
1564 pGuestVmxMsrs->u64VmFunc = RT_BF_MAKE(VMX_BF_VMFUNC_EPTP_SWITCHING, 1);
1565}
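
/*
 * Sidebar: each VMX control MSR built above packs the allowed-0 settings (bits
 * that must be 1) in the low dword and the allowed-1 settings (bits that may
 * be 1) in the high dword, see the RT_MAKE_U64 calls above.  A minimal sketch
 * of how a consumer could validate a proposed control value against such a
 * pair; illustrative only, the helper below is made up and not part of the
 * CPUM interface:
 */
#if 0
static bool vmxSketchIsCtlValueValid(uint64_t uCtlsMsr, uint32_t fCtl)
{
    uint32_t const fAllowed0 = RT_LO_U32(uCtlsMsr);  /* Bits that must be 1. */
    uint32_t const fAllowed1 = RT_HI_U32(uCtlsMsr);  /* Bits that may be 1. */
    return (fCtl & fAllowed0) == fAllowed0           /* All must-be-one bits are set... */
        && !(fCtl & ~fAllowed1);                     /* ...and no disallowed bit is set. */
}
#endif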
1566
1567
1568/**
1569 * Checks whether the given guest CPU VMX features are compatible with the provided
1570 * base features.
1571 *
1572 * @returns @c true if compatible, @c false otherwise.
1573 * @param pVM The cross context VM structure.
1574 * @param pBase The base VMX CPU features.
1575 * @param pGst The guest VMX CPU features.
1576 *
1577 * @remarks Only VMX feature bits are examined.
1578 */
1579static bool cpumR3AreVmxCpuFeaturesCompatible(PVM pVM, PCCPUMFEATURES pBase, PCCPUMFEATURES pGst)
1580{
1581 if (!cpumR3IsHwAssistNstGstExecAllowed(pVM))
1582 return false;
1583
1584#define CPUM_VMX_FEAT_SHIFT(a_pFeat, a_FeatName, a_cShift) ((uint64_t)(a_pFeat->a_FeatName) << (a_cShift))
1585#define CPUM_VMX_MAKE_FEATURES_1(a_pFeat) ( CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInsOutInfo , 0) \
1586 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExtIntExit , 1) \
1587 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxNmiExit , 2) \
1588 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtNmi , 3) \
1589 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPreemptTimer , 4) \
1590 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPostedInt , 5) \
1591 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxIntWindowExit , 6) \
1592 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxTscOffsetting , 7) \
1593 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxHltExit , 8) \
1594 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInvlpgExit , 9) \
1595 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMwaitExit , 10) \
1596 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdpmcExit , 12) \
1597 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdtscExit , 13) \
1598 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr3LoadExit , 14) \
1599 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr3StoreExit , 15) \
1600 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxTertiaryExecCtls , 16) \
1601 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr8LoadExit , 17) \
1602 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr8StoreExit , 18) \
1603 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseTprShadow , 19) \
1604 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxNmiWindowExit , 20) \
1605 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMovDRxExit , 21) \
1606 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUncondIoExit , 22) \
1607 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseIoBitmaps , 23) \
1608 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMonitorTrapFlag , 24) \
1609 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseMsrBitmaps , 25) \
1610 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMonitorExit , 26) \
1611 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPauseExit , 27) \
1612 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSecondaryExecCtls , 28) \
1613 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtApicAccess , 29) \
1614 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEpt , 30) \
1615 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxDescTableExit , 31) \
1616 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdtscp , 32) \
1617 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtX2ApicMode , 33) \
1618 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVpid , 34) \
1619 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxWbinvdExit , 35) \
1620 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUnrestrictedGuest , 36) \
1621 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxApicRegVirt , 37) \
1622 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtIntDelivery , 38) \
1623 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPauseLoopExit , 39) \
1624 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdrandExit , 40) \
1625 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInvpcid , 41) \
1626 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmFunc , 42) \
1627 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmcsShadowing , 43) \
1628 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdseedExit , 44) \
1629 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPml , 45) \
1630 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEptXcptVe , 46) \
1631 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxConcealVmxFromPt , 47) \
1632 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxXsavesXrstors , 48) \
1633 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxModeBasedExecuteEpt, 49) \
1634 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSppEpt , 50) \
1635 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPtEpt , 51) \
1636 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseTscScaling , 52) \
1637 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUserWaitPause , 53) \
1638 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEnclvExit , 54) \
1639 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxLoadIwKeyExit , 55) \
1640 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadDebugCtls , 56) \
1641 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxIa32eModeGuest , 57) \
1642 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadEferMsr , 58) \
1643 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadPatMsr , 59) \
1644 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveDebugCtls , 60) \
1645 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxHostAddrSpaceSize , 61) \
1646 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitAckExtInt , 62) \
1647 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSavePatMsr , 63))
1648
1649#define CPUM_VMX_MAKE_FEATURES_2(a_pFeat) ( CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitLoadPatMsr , 0) \
1650 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveEferMsr , 1) \
1651 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitLoadEferMsr , 2) \
1652 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSavePreemptTimer , 3) \
1653 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveEferLma , 4) \
1654 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPt , 5) \
1655 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmwriteAll , 6) \
1656 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryInjectSoftInt , 7))
1657
1658    /* Check the first set of feature bits (every guest feature must be present in the base set). */
1659 {
1660 uint64_t const fBase = CPUM_VMX_MAKE_FEATURES_1(pBase);
1661 uint64_t const fGst = CPUM_VMX_MAKE_FEATURES_1(pGst);
1662 if ((fBase | fGst) != fBase)
1663 {
1664 uint64_t const fDiff = fBase ^ fGst;
1665 LogRel(("CPUM: VMX features (1) now exposed to the guest are incompatible with those from the saved state. fBase=%#RX64 fGst=%#RX64 fDiff=%#RX64\n",
1666 fBase, fGst, fDiff));
1667 return false;
1668 }
1669 }
1670
1671    /* Check the second set of feature bits (same subset requirement). */
1672 {
1673 uint64_t const fBase = CPUM_VMX_MAKE_FEATURES_2(pBase);
1674 uint64_t const fGst = CPUM_VMX_MAKE_FEATURES_2(pGst);
1675 if ((fBase | fGst) != fBase)
1676 {
1677 uint64_t const fDiff = fBase ^ fGst;
1678 LogRel(("CPUM: VMX features (2) now exposed to the guest are incompatible with those from the saved state. fBase=%#RX64 fGst=%#RX64 fDiff=%#RX64\n",
1679 fBase, fGst, fDiff));
1680 return false;
1681 }
1682 }
1683#undef CPUM_VMX_FEAT_SHIFT
1684#undef CPUM_VMX_MAKE_FEATURES_1
1685#undef CPUM_VMX_MAKE_FEATURES_2
1686
1687 return true;
1688}
1689
1690
1691/**
1692 * Initializes VMX guest features and MSRs.
1693 *
1694 * @param pVM The cross context VM structure.
1695 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1696 * and no hardware-assisted nested-guest execution is
1697 * possible for this VM.
1698 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1699 */
1700void cpumR3InitVmxGuestFeaturesAndMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PVMXMSRS pGuestVmxMsrs)
1701{
1702 Assert(pVM);
1703 Assert(pGuestVmxMsrs);
1704
1705 /*
1706     * It would be nice to check this earlier, while initializing fNestedVmxEpt,
1707     * but host features have not been enumerated at that point, so do it at least now.
1708 */
1709 if ( !pVM->cpum.s.HostFeatures.fNoExecute
1710 && pVM->cpum.s.fNestedVmxEpt)
1711 {
1712 LogRel(("CPUM: Warning! EPT not exposed to the guest since NX isn't available on the host.\n"));
1713 pVM->cpum.s.fNestedVmxEpt = false;
1714 pVM->cpum.s.fNestedVmxUnrestrictedGuest = false;
1715 }
1716
1717 /*
1718 * Initialize the set of VMX features we emulate.
1719 *
1720     * Note! Some bits might always be reported as 1 if they fall under the
1721 * default1 class bits (e.g. fVmxEntryLoadDebugCtls), see @bugref{9180#c5}.
1722 */
1723 CPUMFEATURES EmuFeat;
1724 RT_ZERO(EmuFeat);
1725 EmuFeat.fVmx = 1;
1726 EmuFeat.fVmxInsOutInfo = 1;
1727 EmuFeat.fVmxExtIntExit = 1;
1728 EmuFeat.fVmxNmiExit = 1;
1729 EmuFeat.fVmxVirtNmi = 1;
1730 EmuFeat.fVmxPreemptTimer = pVM->cpum.s.fNestedVmxPreemptTimer;
1731 EmuFeat.fVmxPostedInt = 0;
1732 EmuFeat.fVmxIntWindowExit = 1;
1733 EmuFeat.fVmxTscOffsetting = 1;
1734 EmuFeat.fVmxHltExit = 1;
1735 EmuFeat.fVmxInvlpgExit = 1;
1736 EmuFeat.fVmxMwaitExit = 1;
1737 EmuFeat.fVmxRdpmcExit = 1;
1738 EmuFeat.fVmxRdtscExit = 1;
1739 EmuFeat.fVmxCr3LoadExit = 1;
1740 EmuFeat.fVmxCr3StoreExit = 1;
1741 EmuFeat.fVmxTertiaryExecCtls = 0;
1742 EmuFeat.fVmxCr8LoadExit = 1;
1743 EmuFeat.fVmxCr8StoreExit = 1;
1744 EmuFeat.fVmxUseTprShadow = 1;
1745 EmuFeat.fVmxNmiWindowExit = 0;
1746 EmuFeat.fVmxMovDRxExit = 1;
1747 EmuFeat.fVmxUncondIoExit = 1;
1748 EmuFeat.fVmxUseIoBitmaps = 1;
1749 EmuFeat.fVmxMonitorTrapFlag = 0;
1750 EmuFeat.fVmxUseMsrBitmaps = 1;
1751 EmuFeat.fVmxMonitorExit = 1;
1752 EmuFeat.fVmxPauseExit = 1;
1753 EmuFeat.fVmxSecondaryExecCtls = 1;
1754 EmuFeat.fVmxVirtApicAccess = 1;
1755 EmuFeat.fVmxEpt = pVM->cpum.s.fNestedVmxEpt;
1756 EmuFeat.fVmxDescTableExit = 1;
1757 EmuFeat.fVmxRdtscp = 1;
1758 EmuFeat.fVmxVirtX2ApicMode = 0;
1759 EmuFeat.fVmxVpid = 0; /** @todo Consider enabling this when EPT works. */
1760 EmuFeat.fVmxWbinvdExit = 1;
1761 EmuFeat.fVmxUnrestrictedGuest = pVM->cpum.s.fNestedVmxUnrestrictedGuest;
1762 EmuFeat.fVmxApicRegVirt = 0;
1763 EmuFeat.fVmxVirtIntDelivery = 0;
1764 EmuFeat.fVmxPauseLoopExit = 0;
1765 EmuFeat.fVmxRdrandExit = 0;
1766 EmuFeat.fVmxInvpcid = 1;
1767 EmuFeat.fVmxVmFunc = 0;
1768 EmuFeat.fVmxVmcsShadowing = 0;
1769 EmuFeat.fVmxRdseedExit = 0;
1770 EmuFeat.fVmxPml = 0;
1771 EmuFeat.fVmxEptXcptVe = 0;
1772 EmuFeat.fVmxConcealVmxFromPt = 0;
1773 EmuFeat.fVmxXsavesXrstors = 0;
1774 EmuFeat.fVmxModeBasedExecuteEpt = 0;
1775 EmuFeat.fVmxSppEpt = 0;
1776 EmuFeat.fVmxPtEpt = 0;
1777 EmuFeat.fVmxUseTscScaling = 0;
1778 EmuFeat.fVmxUserWaitPause = 0;
1779 EmuFeat.fVmxEnclvExit = 0;
1780 EmuFeat.fVmxLoadIwKeyExit = 0;
1781 EmuFeat.fVmxEntryLoadDebugCtls = 1;
1782 EmuFeat.fVmxIa32eModeGuest = 1;
1783 EmuFeat.fVmxEntryLoadEferMsr = 1;
1784 EmuFeat.fVmxEntryLoadPatMsr = 0;
1785 EmuFeat.fVmxExitSaveDebugCtls = 1;
1786 EmuFeat.fVmxHostAddrSpaceSize = 1;
1787 EmuFeat.fVmxExitAckExtInt = 1;
1788 EmuFeat.fVmxExitSavePatMsr = 0;
1789 EmuFeat.fVmxExitLoadPatMsr = 0;
1790 EmuFeat.fVmxExitSaveEferMsr = 1;
1791 EmuFeat.fVmxExitLoadEferMsr = 1;
1792 EmuFeat.fVmxSavePreemptTimer = 0; /* Cannot be enabled if VMX-preemption timer is disabled. */
1793 EmuFeat.fVmxExitSaveEferLma = 1; /* Cannot be disabled if unrestricted guest is enabled. */
1794 EmuFeat.fVmxPt = 0;
1795 EmuFeat.fVmxVmwriteAll = 0; /** @todo NSTVMX: enable this when nested VMCS shadowing is enabled. */
1796 EmuFeat.fVmxEntryInjectSoftInt = 1;
1797
1798 /*
1799 * Merge guest features.
1800 *
1801 * When hardware-assisted VMX may be used, any feature we emulate must also be supported
1802 * by the hardware, hence we merge our emulated features with the host features below.
1803 */
1804 PCCPUMFEATURES pBaseFeat = cpumR3IsHwAssistNstGstExecAllowed(pVM) ? &pVM->cpum.s.HostFeatures : &EmuFeat;
1805 PCPUMFEATURES pGuestFeat = &pVM->cpum.s.GuestFeatures;
1806 Assert(pBaseFeat->fVmx);
1807 pGuestFeat->fVmxInsOutInfo = (pBaseFeat->fVmxInsOutInfo & EmuFeat.fVmxInsOutInfo );
1808 pGuestFeat->fVmxExtIntExit = (pBaseFeat->fVmxExtIntExit & EmuFeat.fVmxExtIntExit );
1809 pGuestFeat->fVmxNmiExit = (pBaseFeat->fVmxNmiExit & EmuFeat.fVmxNmiExit );
1810 pGuestFeat->fVmxVirtNmi = (pBaseFeat->fVmxVirtNmi & EmuFeat.fVmxVirtNmi );
1811 pGuestFeat->fVmxPreemptTimer = (pBaseFeat->fVmxPreemptTimer & EmuFeat.fVmxPreemptTimer );
1812 pGuestFeat->fVmxPostedInt = (pBaseFeat->fVmxPostedInt & EmuFeat.fVmxPostedInt );
1813 pGuestFeat->fVmxIntWindowExit = (pBaseFeat->fVmxIntWindowExit & EmuFeat.fVmxIntWindowExit );
1814 pGuestFeat->fVmxTscOffsetting = (pBaseFeat->fVmxTscOffsetting & EmuFeat.fVmxTscOffsetting );
1815 pGuestFeat->fVmxHltExit = (pBaseFeat->fVmxHltExit & EmuFeat.fVmxHltExit );
1816 pGuestFeat->fVmxInvlpgExit = (pBaseFeat->fVmxInvlpgExit & EmuFeat.fVmxInvlpgExit );
1817 pGuestFeat->fVmxMwaitExit = (pBaseFeat->fVmxMwaitExit & EmuFeat.fVmxMwaitExit );
1818 pGuestFeat->fVmxRdpmcExit = (pBaseFeat->fVmxRdpmcExit & EmuFeat.fVmxRdpmcExit );
1819 pGuestFeat->fVmxRdtscExit = (pBaseFeat->fVmxRdtscExit & EmuFeat.fVmxRdtscExit );
1820 pGuestFeat->fVmxCr3LoadExit = (pBaseFeat->fVmxCr3LoadExit & EmuFeat.fVmxCr3LoadExit );
1821 pGuestFeat->fVmxCr3StoreExit = (pBaseFeat->fVmxCr3StoreExit & EmuFeat.fVmxCr3StoreExit );
1822 pGuestFeat->fVmxTertiaryExecCtls = (pBaseFeat->fVmxTertiaryExecCtls & EmuFeat.fVmxTertiaryExecCtls );
1823 pGuestFeat->fVmxCr8LoadExit = (pBaseFeat->fVmxCr8LoadExit & EmuFeat.fVmxCr8LoadExit );
1824 pGuestFeat->fVmxCr8StoreExit = (pBaseFeat->fVmxCr8StoreExit & EmuFeat.fVmxCr8StoreExit );
1825 pGuestFeat->fVmxUseTprShadow = (pBaseFeat->fVmxUseTprShadow & EmuFeat.fVmxUseTprShadow );
1826 pGuestFeat->fVmxNmiWindowExit = (pBaseFeat->fVmxNmiWindowExit & EmuFeat.fVmxNmiWindowExit );
1827 pGuestFeat->fVmxMovDRxExit = (pBaseFeat->fVmxMovDRxExit & EmuFeat.fVmxMovDRxExit );
1828 pGuestFeat->fVmxUncondIoExit = (pBaseFeat->fVmxUncondIoExit & EmuFeat.fVmxUncondIoExit );
1829 pGuestFeat->fVmxUseIoBitmaps = (pBaseFeat->fVmxUseIoBitmaps & EmuFeat.fVmxUseIoBitmaps );
1830 pGuestFeat->fVmxMonitorTrapFlag = (pBaseFeat->fVmxMonitorTrapFlag & EmuFeat.fVmxMonitorTrapFlag );
1831 pGuestFeat->fVmxUseMsrBitmaps = (pBaseFeat->fVmxUseMsrBitmaps & EmuFeat.fVmxUseMsrBitmaps );
1832 pGuestFeat->fVmxMonitorExit = (pBaseFeat->fVmxMonitorExit & EmuFeat.fVmxMonitorExit );
1833 pGuestFeat->fVmxPauseExit = (pBaseFeat->fVmxPauseExit & EmuFeat.fVmxPauseExit );
1834 pGuestFeat->fVmxSecondaryExecCtls = (pBaseFeat->fVmxSecondaryExecCtls & EmuFeat.fVmxSecondaryExecCtls );
1835 pGuestFeat->fVmxVirtApicAccess = (pBaseFeat->fVmxVirtApicAccess & EmuFeat.fVmxVirtApicAccess );
1836 pGuestFeat->fVmxEpt = (pBaseFeat->fVmxEpt & EmuFeat.fVmxEpt );
1837 pGuestFeat->fVmxDescTableExit = (pBaseFeat->fVmxDescTableExit & EmuFeat.fVmxDescTableExit );
1838 pGuestFeat->fVmxRdtscp = (pBaseFeat->fVmxRdtscp & EmuFeat.fVmxRdtscp );
1839 pGuestFeat->fVmxVirtX2ApicMode = (pBaseFeat->fVmxVirtX2ApicMode & EmuFeat.fVmxVirtX2ApicMode );
1840 pGuestFeat->fVmxVpid = (pBaseFeat->fVmxVpid & EmuFeat.fVmxVpid );
1841 pGuestFeat->fVmxWbinvdExit = (pBaseFeat->fVmxWbinvdExit & EmuFeat.fVmxWbinvdExit );
1842 pGuestFeat->fVmxUnrestrictedGuest = (pBaseFeat->fVmxUnrestrictedGuest & EmuFeat.fVmxUnrestrictedGuest );
1843 pGuestFeat->fVmxApicRegVirt = (pBaseFeat->fVmxApicRegVirt & EmuFeat.fVmxApicRegVirt );
1844 pGuestFeat->fVmxVirtIntDelivery = (pBaseFeat->fVmxVirtIntDelivery & EmuFeat.fVmxVirtIntDelivery );
1845 pGuestFeat->fVmxPauseLoopExit = (pBaseFeat->fVmxPauseLoopExit & EmuFeat.fVmxPauseLoopExit );
1846 pGuestFeat->fVmxRdrandExit = (pBaseFeat->fVmxRdrandExit & EmuFeat.fVmxRdrandExit );
1847 pGuestFeat->fVmxInvpcid = (pBaseFeat->fVmxInvpcid & EmuFeat.fVmxInvpcid );
1848 pGuestFeat->fVmxVmFunc = (pBaseFeat->fVmxVmFunc & EmuFeat.fVmxVmFunc );
1849 pGuestFeat->fVmxVmcsShadowing = (pBaseFeat->fVmxVmcsShadowing & EmuFeat.fVmxVmcsShadowing );
1850 pGuestFeat->fVmxRdseedExit = (pBaseFeat->fVmxRdseedExit & EmuFeat.fVmxRdseedExit );
1851 pGuestFeat->fVmxPml = (pBaseFeat->fVmxPml & EmuFeat.fVmxPml );
1852 pGuestFeat->fVmxEptXcptVe = (pBaseFeat->fVmxEptXcptVe & EmuFeat.fVmxEptXcptVe );
1853 pGuestFeat->fVmxConcealVmxFromPt = (pBaseFeat->fVmxConcealVmxFromPt & EmuFeat.fVmxConcealVmxFromPt );
1854 pGuestFeat->fVmxXsavesXrstors = (pBaseFeat->fVmxXsavesXrstors & EmuFeat.fVmxXsavesXrstors );
1855 pGuestFeat->fVmxModeBasedExecuteEpt = (pBaseFeat->fVmxModeBasedExecuteEpt & EmuFeat.fVmxModeBasedExecuteEpt );
1856 pGuestFeat->fVmxSppEpt = (pBaseFeat->fVmxSppEpt & EmuFeat.fVmxSppEpt );
1857 pGuestFeat->fVmxPtEpt = (pBaseFeat->fVmxPtEpt & EmuFeat.fVmxPtEpt );
1858 pGuestFeat->fVmxUseTscScaling = (pBaseFeat->fVmxUseTscScaling & EmuFeat.fVmxUseTscScaling );
1859 pGuestFeat->fVmxUserWaitPause = (pBaseFeat->fVmxUserWaitPause & EmuFeat.fVmxUserWaitPause );
1860 pGuestFeat->fVmxEnclvExit = (pBaseFeat->fVmxEnclvExit & EmuFeat.fVmxEnclvExit );
1861 pGuestFeat->fVmxLoadIwKeyExit = (pBaseFeat->fVmxLoadIwKeyExit & EmuFeat.fVmxLoadIwKeyExit );
1862 pGuestFeat->fVmxEntryLoadDebugCtls = (pBaseFeat->fVmxEntryLoadDebugCtls & EmuFeat.fVmxEntryLoadDebugCtls );
1863 pGuestFeat->fVmxIa32eModeGuest = (pBaseFeat->fVmxIa32eModeGuest & EmuFeat.fVmxIa32eModeGuest );
1864 pGuestFeat->fVmxEntryLoadEferMsr = (pBaseFeat->fVmxEntryLoadEferMsr & EmuFeat.fVmxEntryLoadEferMsr );
1865 pGuestFeat->fVmxEntryLoadPatMsr = (pBaseFeat->fVmxEntryLoadPatMsr & EmuFeat.fVmxEntryLoadPatMsr );
1866 pGuestFeat->fVmxExitSaveDebugCtls = (pBaseFeat->fVmxExitSaveDebugCtls & EmuFeat.fVmxExitSaveDebugCtls );
1867 pGuestFeat->fVmxHostAddrSpaceSize = (pBaseFeat->fVmxHostAddrSpaceSize & EmuFeat.fVmxHostAddrSpaceSize );
1868 pGuestFeat->fVmxExitAckExtInt = (pBaseFeat->fVmxExitAckExtInt & EmuFeat.fVmxExitAckExtInt );
1869 pGuestFeat->fVmxExitSavePatMsr = (pBaseFeat->fVmxExitSavePatMsr & EmuFeat.fVmxExitSavePatMsr );
1870 pGuestFeat->fVmxExitLoadPatMsr = (pBaseFeat->fVmxExitLoadPatMsr & EmuFeat.fVmxExitLoadPatMsr );
1871 pGuestFeat->fVmxExitSaveEferMsr = (pBaseFeat->fVmxExitSaveEferMsr & EmuFeat.fVmxExitSaveEferMsr );
1872 pGuestFeat->fVmxExitLoadEferMsr = (pBaseFeat->fVmxExitLoadEferMsr & EmuFeat.fVmxExitLoadEferMsr );
1873 pGuestFeat->fVmxSavePreemptTimer = (pBaseFeat->fVmxSavePreemptTimer & EmuFeat.fVmxSavePreemptTimer );
1874 pGuestFeat->fVmxExitSaveEferLma = (pBaseFeat->fVmxExitSaveEferLma & EmuFeat.fVmxExitSaveEferLma );
1875 pGuestFeat->fVmxPt = (pBaseFeat->fVmxPt & EmuFeat.fVmxPt );
1876 pGuestFeat->fVmxVmwriteAll = (pBaseFeat->fVmxVmwriteAll & EmuFeat.fVmxVmwriteAll );
1877 pGuestFeat->fVmxEntryInjectSoftInt = (pBaseFeat->fVmxEntryInjectSoftInt & EmuFeat.fVmxEntryInjectSoftInt );
1878
1879    /* Don't expose the VMX-preemption timer if the host is subject to the VMX-preemption timer erratum. */
1880 if ( pGuestFeat->fVmxPreemptTimer
1881 && HMIsSubjectToVmxPreemptTimerErratum())
1882 {
1883 LogRel(("CPUM: Warning! VMX-preemption timer not exposed to guest due to host CPU erratum.\n"));
1884 pGuestFeat->fVmxPreemptTimer = 0;
1885 pGuestFeat->fVmxSavePreemptTimer = 0;
1886 }
1887
1888 /* Sanity checking. */
1889 if (!pGuestFeat->fVmxSecondaryExecCtls)
1890 {
1891 Assert(!pGuestFeat->fVmxVirtApicAccess);
1892 Assert(!pGuestFeat->fVmxEpt);
1893 Assert(!pGuestFeat->fVmxDescTableExit);
1894 Assert(!pGuestFeat->fVmxRdtscp);
1895 Assert(!pGuestFeat->fVmxVirtX2ApicMode);
1896 Assert(!pGuestFeat->fVmxVpid);
1897 Assert(!pGuestFeat->fVmxWbinvdExit);
1898 Assert(!pGuestFeat->fVmxUnrestrictedGuest);
1899 Assert(!pGuestFeat->fVmxApicRegVirt);
1900 Assert(!pGuestFeat->fVmxVirtIntDelivery);
1901 Assert(!pGuestFeat->fVmxPauseLoopExit);
1902 Assert(!pGuestFeat->fVmxRdrandExit);
1903 Assert(!pGuestFeat->fVmxInvpcid);
1904 Assert(!pGuestFeat->fVmxVmFunc);
1905 Assert(!pGuestFeat->fVmxVmcsShadowing);
1906 Assert(!pGuestFeat->fVmxRdseedExit);
1907 Assert(!pGuestFeat->fVmxPml);
1908 Assert(!pGuestFeat->fVmxEptXcptVe);
1909 Assert(!pGuestFeat->fVmxConcealVmxFromPt);
1910 Assert(!pGuestFeat->fVmxXsavesXrstors);
1911 Assert(!pGuestFeat->fVmxModeBasedExecuteEpt);
1912 Assert(!pGuestFeat->fVmxSppEpt);
1913 Assert(!pGuestFeat->fVmxPtEpt);
1914 Assert(!pGuestFeat->fVmxUseTscScaling);
1915 Assert(!pGuestFeat->fVmxUserWaitPause);
1916 Assert(!pGuestFeat->fVmxEnclvExit);
1917 }
1918 else if (pGuestFeat->fVmxUnrestrictedGuest)
1919 {
1920 /* See footnote in Intel spec. 27.2 "Recording VM-Exit Information And Updating VM-entry Control Fields". */
1921 Assert(pGuestFeat->fVmxExitSaveEferLma);
1922 /* Unrestricted guest execution requires EPT. See Intel spec. 25.2.1.1 "VM-Execution Control Fields". */
1923 Assert(pGuestFeat->fVmxEpt);
1924 }
1925
1926 if (!pGuestFeat->fVmxTertiaryExecCtls)
1927 Assert(!pGuestFeat->fVmxLoadIwKeyExit);
1928
1929 /*
1930 * Finally initialize the VMX guest MSRs.
1931 */
1932 cpumR3InitVmxGuestMsrs(pVM, pHostVmxMsrs, pGuestFeat, pGuestVmxMsrs);
1933}
1934
1935
1936/**
1937 * Gets the host hardware-virtualization MSRs.
1938 *
1939 * @returns VBox status code.
1940 * @param pMsrs Where to store the MSRs.
1941 */
1942static int cpumR3GetHostHwvirtMsrs(PCPUMMSRS pMsrs)
1943{
1944 Assert(pMsrs);
1945
1946 uint32_t fCaps = 0;
1947 int rc = SUPR3QueryVTCaps(&fCaps);
1948 if (RT_SUCCESS(rc))
1949 {
1950 if (fCaps & (SUPVTCAPS_VT_X | SUPVTCAPS_AMD_V))
1951 {
1952 SUPHWVIRTMSRS HwvirtMsrs;
1953 rc = SUPR3GetHwvirtMsrs(&HwvirtMsrs, false /* fForceRequery */);
1954 if (RT_SUCCESS(rc))
1955 {
1956 if (fCaps & SUPVTCAPS_VT_X)
1957 HMGetVmxMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.vmx);
1958 else
1959 HMGetSvmMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.svm);
1960 return VINF_SUCCESS;
1961 }
1962
1963 LogRel(("CPUM: Querying hardware-virtualization MSRs failed. rc=%Rrc\n", rc));
1964 return rc;
1965 }
1966
1967 LogRel(("CPUM: Querying hardware-virtualization capability succeeded but did not find VT-x or AMD-V\n"));
1968 return VERR_INTERNAL_ERROR_5;
1969 }
1970 LogRel(("CPUM: No hardware-virtualization capability detected\n"));
1971 return VINF_SUCCESS;
1972}
1973
1974
1975/**
1976 * @callback_method_impl{FNTMTIMERINT,
1977 * Callback that fires when the nested VMX-preemption timer expired.}
1978 */
1979static DECLCALLBACK(void) cpumR3VmxPreemptTimerCallback(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser)
1980{
1981 RT_NOREF(pVM, hTimer);
1982 PVMCPU pVCpu = (PVMCPUR3)pvUser;
1983 AssertPtr(pVCpu);
1984 VMCPU_FF_SET(pVCpu, VMCPU_FF_VMX_PREEMPT_TIMER);
1985}
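
/*
 * Sidebar: the timer handle above is created during VM init and armed when a
 * nested-guest runs with an active VMX-preemption timer.  A rough sketch of a
 * hypothetical arming call site, assuming the handle-based TM API
 * (TMTimerSetMicro); the real conversion from VMX-preemption timer ticks to
 * microseconds lives outside this file:
 */
#if 0
    uint64_t const cMicrosecs = 1000; /* Hypothetical expiry; real code derives this from the timer value. */
    int rc = TMTimerSetMicro(pVM, pVCpu->cpum.s.hNestedVmxPreemptTimer, cMicrosecs);
    AssertRC(rc);
#endif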
1986
1987
1988/**
1989 * Initializes the CPUM.
1990 *
1991 * @returns VBox status code.
1992 * @param pVM The cross context VM structure.
1993 */
1994VMMR3DECL(int) CPUMR3Init(PVM pVM)
1995{
1996 LogFlow(("CPUMR3Init\n"));
1997
1998 /*
1999 * Assert alignment, sizes and tables.
2000 */
2001 AssertCompileMemberAlignment(VM, cpum.s, 32);
2002 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
2003 AssertCompileSizeAlignment(CPUMCTX, 64);
2004 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
2005 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
2006 AssertCompileMemberAlignment(VM, cpum, 64);
2007 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
2008#ifdef VBOX_STRICT
2009 int rc2 = cpumR3MsrStrictInitChecks();
2010 AssertRCReturn(rc2, rc2);
2011#endif
2012
2013 /*
2014 * Gather info about the host CPU.
2015 */
2016#if defined(RT_ARCH_X86) || defined(RT_ARCH_AMD64)
2017 if (!ASMHasCpuId())
2018 {
2019 LogRel(("The CPU doesn't support CPUID!\n"));
2020 return VERR_UNSUPPORTED_CPU;
2021 }
2022#endif
2023
2024 pVM->cpum.s.fHostMxCsrMask = CPUMR3DeterminHostMxCsrMask();
2025
2026 CPUMMSRS HostMsrs;
2027 RT_ZERO(HostMsrs);
2028 int rc = cpumR3GetHostHwvirtMsrs(&HostMsrs);
2029 AssertLogRelRCReturn(rc, rc);
2030
2031 PCPUMCPUIDLEAF paLeaves;
2032 uint32_t cLeaves;
2033 rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
2034 AssertLogRelRCReturn(rc, rc);
2035
2036 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &HostMsrs, &pVM->cpum.s.HostFeatures);
2037 RTMemFree(paLeaves);
2038 AssertLogRelRCReturn(rc, rc);
2039 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
2040
2041 /*
2042 * Check that the CPU supports the minimum features we require.
2043 */
2044 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
2045 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
2046 if (!pVM->cpum.s.HostFeatures.fMmx)
2047 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
2048 if (!pVM->cpum.s.HostFeatures.fTsc)
2049 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
2050
2051 /*
2052 * Setup the CR4 AND and OR masks used in the raw-mode switcher.
2053 */
2054 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
2055 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
2056
2057 /*
2058 * Figure out which XSAVE/XRSTOR features are available on the host.
2059 */
2060 uint64_t fXcr0Host = 0;
2061 uint64_t fXStateHostMask = 0;
2062#if defined(RT_ARCH_X86) || defined(RT_ARCH_AMD64)
2063 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
2064 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
2065 {
2066 fXStateHostMask = fXcr0Host = ASMGetXcr0();
2067 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
2068 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
2069 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
2070 }
2071#endif
2072 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
2073 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
2074 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
2075
2076 /*
2077     * Check the extended state sizes and initialize the per-VCPU host XSAVE/XRSTOR mask.
2078 */
2079 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
2080 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
2081 AssertLogRelReturn( pVM->cpum.s.HostFeatures.cbMaxExtendedState >= sizeof(X86FXSTATE)
2082 && pVM->cpum.s.HostFeatures.cbMaxExtendedState <= sizeof(pVM->apCpusR3[0]->cpum.s.Host.XState)
2083 && pVM->cpum.s.HostFeatures.cbMaxExtendedState <= sizeof(pVM->apCpusR3[0]->cpum.s.Guest.XState)
2084 , VERR_CPUM_IPE_2);
2085
2086 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2087 {
2088 PVMCPU pVCpu = pVM->apCpusR3[i];
2089
2090 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
2091 pVCpu->cpum.s.hNestedVmxPreemptTimer = NIL_TMTIMERHANDLE;
2092 }
2093
2094 /*
2095 * Register saved state data item.
2096 */
2097 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
2098 NULL, cpumR3LiveExec, NULL,
2099 NULL, cpumR3SaveExec, NULL,
2100 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
2101 if (RT_FAILURE(rc))
2102 return rc;
2103
2104 /*
2105 * Register info handlers and registers with the debugger facility.
2106 */
2107    DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays all the cpu states.",
2108 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
2109 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
2110 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
2111 DBGFR3InfoRegisterInternalEx(pVM, "cpumguesthwvirt", "Displays the guest hwvirt. cpu state.",
2112 &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
2113 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
2114 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
2115 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
2116 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
2117 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
2118 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
2119 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
2120 DBGFR3InfoRegisterInternal( pVM, "cpumvmxfeat", "Displays the host and guest VMX hwvirt. features.",
2121 &cpumR3InfoVmxFeatures);
2122
2123 rc = cpumR3DbgInit(pVM);
2124 if (RT_FAILURE(rc))
2125 return rc;
2126
2127#if defined(RT_ARCH_X86) || defined(RT_ARCH_AMD64)
2128 /*
2129     * Check if we need to work around partial/leaky FPU handling.
2130 */
2131 cpumR3CheckLeakyFpu(pVM);
2132#endif
2133
2134 /*
2135 * Initialize the Guest CPUID and MSR states.
2136 */
2137 rc = cpumR3InitCpuIdAndMsrs(pVM, &HostMsrs);
2138 if (RT_FAILURE(rc))
2139 return rc;
2140
2141 /*
2142 * Init the VMX/SVM state.
2143 *
2144     * This must be done after initializing CPUID/MSR features as we access
2145     * the VMX/SVM guest features below.
2146 *
2147 * In the case of nested VT-x, we also need to create the per-VCPU
2148 * VMX preemption timers.
2149 */
2150 if (pVM->cpum.s.GuestFeatures.fVmx)
2151 cpumR3InitVmxHwVirtState(pVM);
2152 else if (pVM->cpum.s.GuestFeatures.fSvm)
2153 cpumR3InitSvmHwVirtState(pVM);
2154 else
2155 Assert(pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.enmHwvirt == CPUMHWVIRT_NONE);
2156
2157 CPUMR3Reset(pVM);
2158 return VINF_SUCCESS;
2159}
2160
2161
2162/**
2163 * Applies relocations to data and code managed by this
2164 * component. This function will be called at init and
2165 * whenever the VMM needs to relocate itself inside the GC.
2166 *
2167 * The CPUM will update the addresses used by the switcher.
2168 *
2169 * @param pVM The cross context VM structure.
2170 */
2171VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
2172{
2173 RT_NOREF(pVM);
2174}
2175
2176
2177/**
2178 * Terminates the CPUM.
2179 *
2180 * Termination means cleaning up and freeing all resources;
2181 * the VM itself is at this point powered off or suspended.
2182 *
2183 * @returns VBox status code.
2184 * @param pVM The cross context VM structure.
2185 */
2186VMMR3DECL(int) CPUMR3Term(PVM pVM)
2187{
2188#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2189 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2190 {
2191 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2192 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
2193 pVCpu->cpum.s.uMagic = 0;
2194        pVCpu->cpum.s.Guest.dr[5] = 0;
2195 }
2196#endif
2197
2198 if (pVM->cpum.s.GuestFeatures.fVmx)
2199 {
2200 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2201 {
2202 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2203 if (pVCpu->cpum.s.hNestedVmxPreemptTimer != NIL_TMTIMERHANDLE)
2204 {
2205 int rc = TMR3TimerDestroy(pVM, pVCpu->cpum.s.hNestedVmxPreemptTimer); AssertRC(rc);
2206 pVCpu->cpum.s.hNestedVmxPreemptTimer = NIL_TMTIMERHANDLE;
2207 }
2208 }
2209 }
2210 return VINF_SUCCESS;
2211}
2212
2213
2214/**
2215 * Resets a virtual CPU.
2216 *
2217 * Used by CPUMR3Reset and CPU hot plugging.
2218 *
2219 * @param pVM The cross context VM structure.
2220 * @param pVCpu The cross context virtual CPU structure of the CPU that is
2221 * being reset. This may differ from the current EMT.
2222 */
2223VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
2224{
2225 /** @todo anything different for VCPU > 0? */
2226 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2227
2228 /*
2229 * Initialize everything to ZERO first.
2230 */
2231 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
2232
2233 RT_BZERO(pCtx, RT_UOFFSETOF(CPUMCTX, aoffXState));
2234
2235 pVCpu->cpum.s.fUseFlags = fUseFlags;
2236
2237 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
2238 pCtx->eip = 0x0000fff0;
2239 pCtx->edx = 0x00000600; /* P6 processor */
2240 pCtx->eflags.Bits.u1Reserved0 = 1;
2241
2242 pCtx->cs.Sel = 0xf000;
2243 pCtx->cs.ValidSel = 0xf000;
2244 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
2245 pCtx->cs.u64Base = UINT64_C(0xffff0000);
2246 pCtx->cs.u32Limit = 0x0000ffff;
2247 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
2248 pCtx->cs.Attr.n.u1Present = 1;
2249 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
2250
2251 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
2252 pCtx->ds.u32Limit = 0x0000ffff;
2253 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
2254 pCtx->ds.Attr.n.u1Present = 1;
2255 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2256
2257 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
2258 pCtx->es.u32Limit = 0x0000ffff;
2259 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
2260 pCtx->es.Attr.n.u1Present = 1;
2261 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2262
2263 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
2264 pCtx->fs.u32Limit = 0x0000ffff;
2265 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
2266 pCtx->fs.Attr.n.u1Present = 1;
2267 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2268
2269 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
2270 pCtx->gs.u32Limit = 0x0000ffff;
2271 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
2272 pCtx->gs.Attr.n.u1Present = 1;
2273 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2274
2275 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
2276 pCtx->ss.u32Limit = 0x0000ffff;
2277 pCtx->ss.Attr.n.u1Present = 1;
2278 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
2279 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2280
2281 pCtx->idtr.cbIdt = 0xffff;
2282 pCtx->gdtr.cbGdt = 0xffff;
2283
2284 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2285 pCtx->ldtr.u32Limit = 0xffff;
2286 pCtx->ldtr.Attr.n.u1Present = 1;
2287 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
2288
2289 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
2290 pCtx->tr.u32Limit = 0xffff;
2291 pCtx->tr.Attr.n.u1Present = 1;
2292 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
2293
2294 pCtx->dr[6] = X86_DR6_INIT_VAL;
2295 pCtx->dr[7] = X86_DR7_INIT_VAL;
2296
2297 PX86FXSTATE pFpuCtx = &pCtx->XState.x87;
2298    pFpuCtx->FTW = 0x00; /* All empty (abridged tag reg edition). */
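    /* 0x37f: all x87 exceptions masked, 64-bit precision, round-to-nearest (the FNINIT default). */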
2299 pFpuCtx->FCW = 0x37f;
2300
2301 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
2302 IA-32 Processor States Following Power-up, Reset, or INIT */
2303 pFpuCtx->MXCSR = 0x1F80;
2304 pFpuCtx->MXCSR_MASK = pVM->cpum.s.GuestInfo.fMxCsrMask; /** @todo check if REM messes this up... */
2305
2306 pCtx->aXcr[0] = XSAVE_C_X87;
2307 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_UOFFSETOF(X86XSAVEAREA, Hdr))
2308 {
2309 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
2310 as we don't know what happened before. (Bother optimize later?) */
2311 pCtx->XState.Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
2312 }
2313
2314 /*
2315 * MSRs.
2316 */
2317 /* Init PAT MSR */
2318 pCtx->msrPAT = MSR_IA32_CR_PAT_INIT_VAL;
2319
2320 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
2321 * The Intel docs don't mention it. */
2322 Assert(!pCtx->msrEFER);
2323
2324 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
2325       is supposed to be here, just trying to provide useful/sensible values. */
2326 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
2327 if (pRange)
2328 {
2329 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2330 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
2331 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
2332 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
2333 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2334 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
2335 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
2336 }
2337
2338 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
2339
2340 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
2341 * called from each EMT while we're getting called by CPUMR3Reset()
2342 * iteratively on the same thread. Fix later. */
2343#if 0 /** @todo r=bird: This we will do in TM, not here. */
2344 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
2345 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
2346#endif
2347
2348
2349 /* C-state control. Guesses. */
2350 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
2351 /* For Nehalem+ and Atoms, the 0xE2 MSR (MSR_PKG_CST_CONFIG_CONTROL) is documented. For Core 2,
2352 * it's undocumented but exists as MSR_PMG_CST_CONFIG_CONTROL and has similar but not identical
2353 * functionality. The default value must be different due to incompatible write mask.
2354 */
2355 if (CPUMMICROARCH_IS_INTEL_CORE2(pVM->cpum.s.GuestFeatures.enmMicroarch))
2356 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x202a01; /* From Mac Pro Harpertown, unlocked. */
2357 else if (pVM->cpum.s.GuestFeatures.enmMicroarch == kCpumMicroarch_Intel_Core_Yonah)
2358 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x26740c; /* From MacBookPro1,1. */
2359
2360 /*
2361 * Hardware virtualization state.
2362 */
2363 CPUMSetGuestGif(pCtx, true);
2364 Assert(!pVM->cpum.s.GuestFeatures.fVmx || !pVM->cpum.s.GuestFeatures.fSvm); /* Paranoia. */
2365 if (pVM->cpum.s.GuestFeatures.fVmx)
2366 cpumR3ResetVmxHwVirtState(pVCpu);
2367 else if (pVM->cpum.s.GuestFeatures.fSvm)
2368 cpumR3ResetSvmHwVirtState(pVCpu);
2369}
2370
2371
2372/**
2373 * Resets the CPU.
2374 *
2375 * @returns VINF_SUCCESS.
2376 * @param pVM The cross context VM structure.
2377 */
2378VMMR3DECL(void) CPUMR3Reset(PVM pVM)
2379{
2380 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2381 {
2382 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2383 CPUMR3ResetCpu(pVM, pVCpu);
2384
2385#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2386
2387 /* Magic marker for searching in crash dumps. */
2388        strcpy((char *)pVCpu->cpum.s.aMagic, "CPUMCPU Magic");
2389 pVCpu->cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
2390        pVCpu->cpum.s.Guest.dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
2391#endif
2392 }
2393}
2394
2395
2396
2397
2398/**
2399 * Pass 0 live exec callback.
2400 *
2401 * @returns VINF_SSM_DONT_CALL_AGAIN.
2402 * @param pVM The cross context VM structure.
2403 * @param pSSM The saved state handle.
2404 * @param uPass The pass (0).
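 * @note    Only the CPUID leaves are saved here (via cpumR3SaveCpuId); returning
 *          VINF_SSM_DONT_CALL_AGAIN tells SSM not to invoke this callback again,
 *          the remaining state is written by cpumR3SaveExec at the final pass.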
2405 */
2406static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
2407{
2408 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
2409 cpumR3SaveCpuId(pVM, pSSM);
2410 return VINF_SSM_DONT_CALL_AGAIN;
2411}
2412
2413
2414/**
2415 * Execute state save operation.
2416 *
2417 * @returns VBox status code.
2418 * @param pVM The cross context VM structure.
2419 * @param pSSM SSM operation handle.
2420 */
2421static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
2422{
2423 /*
2424 * Save.
2425 */
2426 SSMR3PutU32(pSSM, pVM->cCpus);
2427 SSMR3PutU32(pSSM, sizeof(pVM->apCpusR3[0]->cpum.s.GuestMsrs.msr));
2428 CPUMCTX DummyHyperCtx;
2429 RT_ZERO(DummyHyperCtx);
2430 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2431 {
2432 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2433
2434 SSMR3PutStructEx(pSSM, &DummyHyperCtx, sizeof(DummyHyperCtx), 0, g_aCpumCtxFields, NULL);
2435
2436 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2437 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2438 SSMR3PutStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87), 0, g_aCpumX87Fields, NULL);
2439 if (pGstCtx->fXStateMask != 0)
2440 SSMR3PutStructEx(pSSM, &pGstCtx->XState.Hdr, sizeof(pGstCtx->XState.Hdr), 0, g_aCpumXSaveHdrFields, NULL);
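        /* Save each enabled XSAVE component below; the CPUMCTX_XSAVE_C_PTR macro
           resolves a component's offset within the guest's extended state area. */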
2441 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2442 {
2443 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2444 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2445 }
2446 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2447 {
2448 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2449 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2450 }
2451 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2452 {
2453 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2454 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2455 }
2456 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2457 {
2458 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2459 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2460 }
2461 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2462 {
2463 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2464 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2465 }
2466 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[0].u);
2467 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[1].u);
2468 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[2].u);
2469 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[3].u);
2470 if (pVM->cpum.s.GuestFeatures.fSvm)
2471 {
2472 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uMsrHSavePa);
2473 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.svm.GCPhysVmcb);
2474 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uPrevPauseTick);
2475 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilter);
2476 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2477 SSMR3PutBool(pSSM, pGstCtx->hwvirt.svm.fInterceptEvents);
2478 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState), 0 /* fFlags */,
2479 g_aSvmHwvirtHostState, NULL /* pvUser */);
2480 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.Vmcb, sizeof(pGstCtx->hwvirt.svm.Vmcb));
2481 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.svm.abMsrBitmap));
2482 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.abIoBitmap[0], sizeof(pGstCtx->hwvirt.svm.abIoBitmap));
2483 SSMR3PutU32(pSSM, pGstCtx->hwvirt.fLocalForcedActions);
2484 SSMR3PutBool(pSSM, pGstCtx->hwvirt.fGif);
2485 }
2486 if (pVM->cpum.s.GuestFeatures.fVmx)
2487 {
2488 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmxon);
2489 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmcs);
2490 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2491 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxRootMode);
2492 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2493 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInterceptEvents);
2494 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2495 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.vmx.Vmcs, sizeof(pGstCtx->hwvirt.vmx.Vmcs), 0, g_aVmxHwvirtVmcs, NULL);
2496 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.vmx.ShadowVmcs, sizeof(pGstCtx->hwvirt.vmx.ShadowVmcs),
2497 0, g_aVmxHwvirtVmcs, NULL);
2498 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abVmreadBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmreadBitmap));
2499 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abVmwriteBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmwriteBitmap));
2500 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aEntryMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aEntryMsrLoadArea));
2501 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrStoreArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrStoreArea));
2502 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrLoadArea));
2503 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abMsrBitmap));
2504 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abIoBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abIoBitmap));
2505 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2506 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uPrevPauseTick);
2507 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uEntryTick);
2508 SSMR3PutU16(pSSM, pGstCtx->hwvirt.vmx.offVirtApicWrite);
2509 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2510            SSMR3PutU64(pSSM, MSR_IA32_FEATURE_CONTROL_LOCK | MSR_IA32_FEATURE_CONTROL_VMXON);    /* Deprecated since 2021/09/22. Value kept backwards compatible with 6.1.26. */
2511 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2512 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2513 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2514 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2515 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2516 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2517 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2518 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2519 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2520 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2521 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2522 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2523 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2524 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2525 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2526 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2527 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2528 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2529 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64ProcCtls3);
2530 }
2531 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
2532 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
2533 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
2534 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
2535 }
2536
2537 cpumR3SaveCpuId(pVM, pSSM);
2538 return VINF_SUCCESS;
2539}
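/*
 * Editor's note: the save/load callbacks in this file are wired up during CPUM
 * initialization via SSM unit registration.  A minimal sketch, assuming the
 * usual "cpum" unit name and current saved-state version (illustrative, not
 * the exact call site):
 *
 *     rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
 *                                NULL, NULL, NULL,
 *                                NULL, cpumR3SaveExec, NULL,
 *                                cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
 */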
2540
2541
2542/**
2543 * @callback_method_impl{FNSSMINTLOADPREP}
2544 */
2545static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
2546{
2547 NOREF(pSSM);
2548 pVM->cpum.s.fPendingRestore = true;
2549 return VINF_SUCCESS;
2550}
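/*
 * Editor's note: fPendingRestore set above is cleared near the end of
 * cpumR3LoadExec and checked again in cpumR3LoadDone and
 * CPUMR3IsStateRestorePending() below, so a missing or failed CPUM unit
 * leaves the VM flagged as not fully restored.
 */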
2551
2552
2553/**
2554 * @callback_method_impl{FNSSMINTLOADEXEC}
2555 */
2556static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
2557{
2558 int rc; /* Only for AssertRCReturn use. */
2559
2560 /*
2561 * Validate version.
2562 */
2563 if ( uVersion != CPUM_SAVED_STATE_VERSION_PAE_PDPES
2564 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2
2565 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_VMX
2566 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_SVM
2567 && uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
2568 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
2569 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
2570 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
2571 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
2572 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
2573 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
2574 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
2575 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
2576 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
2577 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
2578 {
2579 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
2580 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2581 }
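    /* Editor's note: the range comparisons below (<=, >=) rely on the
       CPUM_SAVED_STATE_VERSION_* constants increasing monotonically with
       newer formats. */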
2582
2583 if (uPass == SSM_PASS_FINAL)
2584 {
2585 /*
2586 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
2587 * really old SSM file versions.)
2588 */
2589 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2590 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
2591 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
2592 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR));
2593
2594 /*
2595 * Figure x86 and ctx field definitions to use for older states.
2596 */
2597 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
2598 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
2599 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
2600 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2601 {
2602 paCpumCtx1Fields = g_aCpumX87FieldsV16;
2603 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
2604 }
2605 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2606 {
2607 paCpumCtx1Fields = g_aCpumX87FieldsMem;
2608 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
2609 }
2610
2611 /*
2612          * The hyper state used to precede the CPU count.  Starting with the
2613          * XSAVE format it was moved down until after the count has been read.
2614 */
2615 CPUMCTX HyperCtxIgnored;
2616 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
2617 {
2618 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2619 {
2620 X86FXSTATE Ign;
2621 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2622 SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored),
2623 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2624 }
2625 }
2626
2627 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
2628 {
2629 uint32_t cCpus;
2630 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
2631 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u \n", cCpus, pVM->cCpus),
2632 VERR_SSM_UNEXPECTED_DATA);
2633 }
2634 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
2635 || pVM->cCpus == 1,
2636 ("cCpus=%u\n", pVM->cCpus),
2637 VERR_SSM_UNEXPECTED_DATA);
2638
2639 uint32_t cbMsrs = 0;
2640 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2641 {
2642 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
2643 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
2644 VERR_SSM_UNEXPECTED_DATA);
2645 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
2646 VERR_SSM_UNEXPECTED_DATA);
2647 }
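        /* Editor's note: cbMsrs is the saved MSR block size; it is used
           further down to read exactly that many bytes, so builds with a
           larger CPUMCTXMSRS can still load older, smaller states. */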
2648
2649 /*
2650 * Do the per-CPU restoring.
2651 */
2652 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2653 {
2654 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2655 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2656
2657 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
2658 {
2659 /*
2660 * The XSAVE saved state layout moved the hyper state down here.
2661 */
2662 rc = SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored), 0, g_aCpumCtxFields, NULL);
2663 AssertRCReturn(rc, rc);
2664
2665 /*
2666 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
2667 */
2668 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2669 rc = SSMR3GetStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87), 0, g_aCpumX87Fields, NULL);
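                /* Editor's note: the first rc above is overwritten; this is
                   mostly harmless since SSM latches failures in the handle
                   status (cf. SSMR3HandleGetStatus in cpumR3LoadDone), but
                   only the second call is checked directly here. */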
2670 AssertRCReturn(rc, rc);
2671
2672 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
2673 if (pGstCtx->fXStateMask != 0)
2674 {
2675 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
2676 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
2677 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
2678 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
2679 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
2680 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2681 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2682 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2683 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2684 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2685 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2686 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2687 }
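                /* Editor's note: the last check above mirrors the
                   architectural XSETBV constraint that the AVX-512 components
                   (OPMASK, ZMM_HI256, ZMM_16HI) are only valid as a complete
                   set together with SSE and YMM. */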
2688
2689 /* Check that the XCR0 mask is valid (invalid results in #GP). */
2690 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
2691 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
2692 {
2693 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
2694 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
2695 VERR_CPUM_INVALID_XCR0);
2696 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
2697 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2698 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2699 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2700 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2701 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2702 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2703 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2704 }
2705
2706                /* Check that XCR1 is zero, as we don't implement it yet. */
2707 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2708
2709 /*
2710 * Restore the individual extended state components we support.
2711 */
2712 if (pGstCtx->fXStateMask != 0)
2713 {
2714 rc = SSMR3GetStructEx(pSSM, &pGstCtx->XState.Hdr, sizeof(pGstCtx->XState.Hdr),
2715 0, g_aCpumXSaveHdrFields, NULL);
2716 AssertRCReturn(rc, rc);
2717 AssertLogRelMsgReturn(!(pGstCtx->XState.Hdr.bmXState & ~pGstCtx->fXStateMask),
2718 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
2719 pGstCtx->XState.Hdr.bmXState, pGstCtx->fXStateMask),
2720 VERR_CPUM_INVALID_XSAVE_HDR);
2721 }
2722 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2723 {
2724 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
2725 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2726 }
2727 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2728 {
2729 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
2730 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2731 }
2732 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2733 {
2734 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
2735 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2736 }
2737 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2738 {
2739 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
2740 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2741 }
2742 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2743 {
2744 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
2745 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2746 }
2747 if (uVersion >= CPUM_SAVED_STATE_VERSION_PAE_PDPES)
2748 {
2749 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[0].u);
2750 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[1].u);
2751 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[2].u);
2752 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[3].u);
2753 }
2754 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_SVM)
2755 {
2756 if (pVM->cpum.s.GuestFeatures.fSvm)
2757 {
2758 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uMsrHSavePa);
2759 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.svm.GCPhysVmcb);
2760 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uPrevPauseTick);
2761 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilter);
2762 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2763 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.svm.fInterceptEvents);
2764 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState),
2765 0 /* fFlags */, g_aSvmHwvirtHostState, NULL /* pvUser */);
2766 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.Vmcb, sizeof(pGstCtx->hwvirt.svm.Vmcb));
2767 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.svm.abMsrBitmap));
2768 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.abIoBitmap[0], sizeof(pGstCtx->hwvirt.svm.abIoBitmap));
2769 SSMR3GetU32(pSSM, &pGstCtx->hwvirt.fLocalForcedActions);
2770 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.fGif);
2771 }
2772 }
2773 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_VMX)
2774 {
2775 if (pVM->cpum.s.GuestFeatures.fVmx)
2776 {
2777 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmxon);
2778 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmcs);
2779 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2780 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxRootMode);
2781 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2782 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInterceptEvents);
2783 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2784 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.vmx.Vmcs, sizeof(pGstCtx->hwvirt.vmx.Vmcs),
2785 0, g_aVmxHwvirtVmcs, NULL);
2786 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.vmx.ShadowVmcs, sizeof(pGstCtx->hwvirt.vmx.ShadowVmcs),
2787 0, g_aVmxHwvirtVmcs, NULL);
2788 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abVmreadBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmreadBitmap));
2789 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abVmwriteBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmwriteBitmap));
2790 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aEntryMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aEntryMsrLoadArea));
2791 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrStoreArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrStoreArea));
2792 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrLoadArea));
2793 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abMsrBitmap));
2794 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abIoBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abIoBitmap));
2795 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2796 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uPrevPauseTick);
2797 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uEntryTick);
2798 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.vmx.offVirtApicWrite);
2799 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2800 SSMR3Skip(pSSM, sizeof(uint64_t)); /* Unused - used to be IA32_FEATURE_CONTROL, see @bugref{10106}. */
2801 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2802 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2803 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2804 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2805 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2806 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2807 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2808 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2809 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2810 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2811 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2812 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2813 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2814 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2815 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2816 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2817 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2818 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2819 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2)
2820 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64ProcCtls3);
2821 }
2822 }
2823 }
2824 else
2825 {
2826 /*
2827 * Pre XSAVE saved state.
2828 */
2829 SSMR3GetStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87),
2830 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2831 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2832 }
2833
2834 /*
2835 * Restore a couple of flags and the MSRs.
2836 */
2837 uint32_t fIgnoredUsedFlags = 0;
2838            rc = SSMR3GetU32(pSSM, &fIgnoredUsedFlags); /* the two relevant flags are recalculated after loading the state. */
2839 AssertRCReturn(rc, rc);
2840 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
2841
2842 rc = VINF_SUCCESS;
2843 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2844 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
2845 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
2846 {
2847 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
2848 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
2849 }
2850 AssertRCReturn(rc, rc);
2851
2852            /* REM and others may have cleared must-be-one fields in DR6 and
2853               DR7; fix these up. */
2854 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
2855 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
2856 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
2857 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
2858 }
2859
2860        /* Older states do not have the internal selector register flags
2861           and valid selector values.  Supply those. */
2862 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2863 {
2864 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2865 {
2866 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2867 bool const fValid = true /*!VM_IS_RAW_MODE_ENABLED(pVM)*/
2868 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2869 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
2870 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
2871 if (fValid)
2872 {
2873 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2874 {
2875 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
2876 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
2877 }
2878
2879 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2880 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2881 }
2882 else
2883 {
2884 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2885 {
2886 paSelReg[iSelReg].fFlags = 0;
2887 paSelReg[iSelReg].ValidSel = 0;
2888 }
2889
2890 /* This might not be 104% correct, but I think it's close
2891 enough for all practical purposes... (REM always loaded
2892 LDTR registers.) */
2893 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2894 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2895 }
2896 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
2897 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
2898 }
2899 }
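        /* Editor's note: supplying fFlags/ValidSel here keeps old saved
           states usable with code that tests the hidden parts via the
           CPUMSELREG_ARE_HIDDEN_PARTS_VALID() macro. */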
2900
2901 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
2902 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2903 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2904 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2905 {
2906 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2907            pVCpu->cpum.s.fChanged &= ~CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
2908 }
2909
2910 /*
2911 * A quick sanity check.
2912 */
2913 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2914 {
2915 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2916 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2917 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2918 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2919 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2920 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2921 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2922 }
2923 }
2924
2925 pVM->cpum.s.fPendingRestore = false;
2926
2927 /*
2928 * Guest CPUIDs (and VMX MSR features).
2929 */
2930 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
2931 {
2932 CPUMMSRS GuestMsrs;
2933 RT_ZERO(GuestMsrs);
2934
2935 CPUMFEATURES BaseFeatures;
2936 bool const fVmxGstFeat = pVM->cpum.s.GuestFeatures.fVmx;
2937 if (fVmxGstFeat)
2938 {
2939 /*
2940 * At this point the MSRs in the guest CPU-context are loaded with the guest VMX MSRs from the saved state.
2941             * However, the VMX sub-features have not been exploded yet, so cache the base (host-derived) VMX features
2942 * here so we can compare them for compatibility after exploding guest features.
2943 */
2944 BaseFeatures = pVM->cpum.s.GuestFeatures;
2945
2946 /* Use the VMX MSR features from the saved state while exploding guest features. */
2947 GuestMsrs.hwvirt.vmx = pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.vmx.Msrs;
2948 }
2949
2950 /* Load CPUID and explode guest features. */
2951 rc = cpumR3LoadCpuId(pVM, pSSM, uVersion, &GuestMsrs);
2952 if (fVmxGstFeat)
2953 {
2954 /*
2955 * Check if the exploded VMX features from the saved state are compatible with the host-derived features
2956             * we cached earlier (above). This is required when using hardware-assisted nested-guest execution with
2957 * VMX features presented to the guest.
2958 */
2959 bool const fIsCompat = cpumR3AreVmxCpuFeaturesCompatible(pVM, &BaseFeatures, &pVM->cpum.s.GuestFeatures);
2960 if (!fIsCompat)
2961 return VERR_CPUM_INVALID_HWVIRT_FEAT_COMBO;
2962 }
2963 return rc;
2964 }
2965 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
2966}
2967
2968
2969/**
2970 * @callback_method_impl{FNSSMINTLOADDONE}
2971 */
2972static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
2973{
2974 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
2975 return VINF_SUCCESS;
2976
2977    /* Just check this since we can. */ /** @todo Add an SSM unit flag for indicating that it's mandatory during a restore. */
2978 if (pVM->cpum.s.fPendingRestore)
2979 {
2980 LogRel(("CPUM: Missing state!\n"));
2981 return VERR_INTERNAL_ERROR_2;
2982 }
2983
2984 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
2985 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2986 {
2987 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2988
2989 /* Notify PGM of the NXE states in case they've changed. */
2990 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
2991
2992 /* During init. this is done in CPUMR3InitCompleted(). */
2993 if (fSupportsLongMode)
2994 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
2995
2996 /* Recalc the CPUM_USE_DEBUG_REGS_HYPER value. */
2997 CPUMRecalcHyperDRx(pVCpu, UINT8_MAX);
2998 }
2999 return VINF_SUCCESS;
3000}
3001
3002
3003/**
3004 * Checks if the CPUM state restore is still pending.
3005 *
3006 * @returns true / false.
3007 * @param pVM The cross context VM structure.
3008 */
3009VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
3010{
3011 return pVM->cpum.s.fPendingRestore;
3012}
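/*
 * Editor's note: callers can use the query above to refuse operations that
 * would touch the guest context before the saved state has been fully
 * restored (cf. the check in cpumR3LoadDone).
 */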
3013
3014
3015/**
3016 * Formats the EFLAGS value into mnemonics.
3017 *
3018 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
3019 * @param efl The EFLAGS value.
3020 */
3021static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
3022{
3023 /*
3024 * Format the flags.
3025 */
3026 static const struct
3027 {
3028 const char *pszSet; const char *pszClear; uint32_t fFlag;
3029 } s_aFlags[] =
3030 {
3031 { "vip",NULL, X86_EFL_VIP },
3032 { "vif",NULL, X86_EFL_VIF },
3033 { "ac", NULL, X86_EFL_AC },
3034 { "vm", NULL, X86_EFL_VM },
3035 { "rf", NULL, X86_EFL_RF },
3036 { "nt", NULL, X86_EFL_NT },
3037 { "ov", "nv", X86_EFL_OF },
3038 { "dn", "up", X86_EFL_DF },
3039 { "ei", "di", X86_EFL_IF },
3040 { "tf", NULL, X86_EFL_TF },
3041 { "nt", "pl", X86_EFL_SF },
3042 { "nz", "zr", X86_EFL_ZF },
3043 { "ac", "na", X86_EFL_AF },
3044 { "po", "pe", X86_EFL_PF },
3045 { "cy", "nc", X86_EFL_CF },
3046 };
3047 char *psz = pszEFlags;
3048 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
3049 {
3050 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
3051 if (pszAdd)
3052 {
3053 strcpy(psz, pszAdd);
3054 psz += strlen(pszAdd);
3055 *psz++ = ' ';
3056 }
3057 }
3058 psz[-1] = '\0';
3059}
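/*
 * Editor's note: the mnemonics follow the DOS DEBUG convention.  E.g. for
 * efl=0x246 (IF, ZF and PF set) the buffer receives
 * "nv up ei pl zr na pe nc"; flags without a clear-mnemonic (tf, nt, ...)
 * are simply omitted when not set.
 */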
3060
3061
3062/**
3063 * Formats a full register dump.
3064 *
3065 * @param pVM The cross context VM structure.
3066 * @param pCtx The context to format.
3067 * @param pCtxCore The context core to format.
3068 * @param pHlp Output functions.
3069 * @param enmType The dump type.
3070 * @param pszPrefix Register name prefix.
3071 */
3072static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
3073 const char *pszPrefix)
3074{
3075 NOREF(pVM);
3076
3077 /*
3078 * Format the EFLAGS.
3079 */
3080 uint32_t efl = pCtxCore->eflags.u32;
3081 char szEFlags[80];
3082 cpumR3InfoFormatFlags(&szEFlags[0], efl);
3083
3084 /*
3085 * Format the registers.
3086 */
3087 switch (enmType)
3088 {
3089 case CPUMDUMPTYPE_TERSE:
3090 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3091 pHlp->pfnPrintf(pHlp,
3092 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3093 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3094 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3095 "%sr14=%016RX64 %sr15=%016RX64\n"
3096 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3097 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3098 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3099 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3100 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3101 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3102 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3103 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3104 else
3105 pHlp->pfnPrintf(pHlp,
3106 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3107 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3108 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3109 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3110 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3111 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3112 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3113 break;
3114
3115 case CPUMDUMPTYPE_DEFAULT:
3116 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3117 pHlp->pfnPrintf(pHlp,
3118 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3119 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3120 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3121 "%sr14=%016RX64 %sr15=%016RX64\n"
3122 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3123 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3124 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
3125 ,
3126 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3127 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3128 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3129 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3130 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3131 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3132 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3133 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3134 else
3135 pHlp->pfnPrintf(pHlp,
3136 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3137 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3138 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3139 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
3140 ,
3141 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3142 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3143 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3144 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3145 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3146 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3147 break;
3148
3149 case CPUMDUMPTYPE_VERBOSE:
3150 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3151 pHlp->pfnPrintf(pHlp,
3152 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3153 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3154 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3155 "%sr14=%016RX64 %sr15=%016RX64\n"
3156 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3157 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3158 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3159 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3160 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3161 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3162 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3163 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
3164 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
3165 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
3166 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3167 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3168 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3169 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
3170 ,
3171 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3172 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3173 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3174 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3175 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
3176 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
3177 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
3178 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
3179 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
3180 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
3181 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3182 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3183 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3184 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3185 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3186 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3187 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3188 else
3189 pHlp->pfnPrintf(pHlp,
3190 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3191 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3192 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
3193 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
3194 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
3195 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
3196 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
3197 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
3198 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3199 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3200 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3201 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
3202 ,
3203 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3204 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3205 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
3206 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3207 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
3208 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3209 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
3210 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3211 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3212 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3213 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3214 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3215
3216 pHlp->pfnPrintf(pHlp, "%sxcr=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
3217 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
3218 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
3219 {
3220 PX86FXSTATE pFpuCtx = &pCtx->XState.x87;
3221 pHlp->pfnPrintf(pHlp,
3222 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
3223 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
3224 ,
3225 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
3226 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
3227 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
3228 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
3229 );
3230 /*
3231 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
3232         * not (FP)R0-7 as the Intel SDM suggests.
3233 */
3234 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
3235 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
3236 {
3237 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
3238 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
3239 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
3240 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
3241 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
3242 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
3243 iExponent -= 16383; /* subtract bias */
3244            /** @todo This isn't entirely correct and needs more work! */
3245 pHlp->pfnPrintf(pHlp,
3246 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
3247 pszPrefix, iST, pszPrefix, iFPR,
3248 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
3249 uTag, chSign, iInteger, u64Fraction, iExponent);
3250 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
3251 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
3252 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
3253 else
3254 pHlp->pfnPrintf(pHlp, "\n");
3255 }
3256
3257 /* XMM/YMM/ZMM registers. */
3258 if (pCtx->fXStateMask & XSAVE_C_YMM)
3259 {
3260 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
3261 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
3262 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3263 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3264 pszPrefix, i, i < 10 ? " " : "",
3265 pYmmHiCtx->aYmmHi[i].au32[3],
3266 pYmmHiCtx->aYmmHi[i].au32[2],
3267 pYmmHiCtx->aYmmHi[i].au32[1],
3268 pYmmHiCtx->aYmmHi[i].au32[0],
3269 pFpuCtx->aXMM[i].au32[3],
3270 pFpuCtx->aXMM[i].au32[2],
3271 pFpuCtx->aXMM[i].au32[1],
3272 pFpuCtx->aXMM[i].au32[0]);
3273 else
3274 {
3275 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
3276 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3277 pHlp->pfnPrintf(pHlp,
3278 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3279 pszPrefix, i, i < 10 ? " " : "",
3280 pZmmHi256->aHi256Regs[i].au32[7],
3281 pZmmHi256->aHi256Regs[i].au32[6],
3282 pZmmHi256->aHi256Regs[i].au32[5],
3283 pZmmHi256->aHi256Regs[i].au32[4],
3284 pZmmHi256->aHi256Regs[i].au32[3],
3285 pZmmHi256->aHi256Regs[i].au32[2],
3286 pZmmHi256->aHi256Regs[i].au32[1],
3287 pZmmHi256->aHi256Regs[i].au32[0],
3288 pYmmHiCtx->aYmmHi[i].au32[3],
3289 pYmmHiCtx->aYmmHi[i].au32[2],
3290 pYmmHiCtx->aYmmHi[i].au32[1],
3291 pYmmHiCtx->aYmmHi[i].au32[0],
3292 pFpuCtx->aXMM[i].au32[3],
3293 pFpuCtx->aXMM[i].au32[2],
3294 pFpuCtx->aXMM[i].au32[1],
3295 pFpuCtx->aXMM[i].au32[0]);
3296
3297 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
3298 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
3299 pHlp->pfnPrintf(pHlp,
3300 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3301 pszPrefix, i + 16,
3302 pZmm16Hi->aRegs[i].au32[15],
3303 pZmm16Hi->aRegs[i].au32[14],
3304 pZmm16Hi->aRegs[i].au32[13],
3305 pZmm16Hi->aRegs[i].au32[12],
3306 pZmm16Hi->aRegs[i].au32[11],
3307 pZmm16Hi->aRegs[i].au32[10],
3308 pZmm16Hi->aRegs[i].au32[9],
3309 pZmm16Hi->aRegs[i].au32[8],
3310 pZmm16Hi->aRegs[i].au32[7],
3311 pZmm16Hi->aRegs[i].au32[6],
3312 pZmm16Hi->aRegs[i].au32[5],
3313 pZmm16Hi->aRegs[i].au32[4],
3314 pZmm16Hi->aRegs[i].au32[3],
3315 pZmm16Hi->aRegs[i].au32[2],
3316 pZmm16Hi->aRegs[i].au32[1],
3317 pZmm16Hi->aRegs[i].au32[0]);
3318 }
3319 }
3320 else
3321 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3322 pHlp->pfnPrintf(pHlp,
3323 i & 1
3324 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
3325 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
3326 pszPrefix, i, i < 10 ? " " : "",
3327 pFpuCtx->aXMM[i].au32[3],
3328 pFpuCtx->aXMM[i].au32[2],
3329 pFpuCtx->aXMM[i].au32[1],
3330 pFpuCtx->aXMM[i].au32[0]);
3331
3332 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
3333 {
3334 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
3335 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
3336 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
3337 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
3338 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
3339 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
3340 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
3341 }
3342
3343 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
3344 {
3345 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
3346 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
3347 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
3348 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
3349 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
3350 }
3351
3352 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
3353 {
3354 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
3355 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
3356 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
3357 }
3358
3359 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
3360 if (pFpuCtx->au32RsrvdRest[i])
3361 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
3362 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_UOFFSETOF_DYN(X86FXSTATE, au32RsrvdRest[i]) );
3363 }
3364
3365 pHlp->pfnPrintf(pHlp,
3366 "%sEFER =%016RX64\n"
3367 "%sPAT =%016RX64\n"
3368 "%sSTAR =%016RX64\n"
3369 "%sCSTAR =%016RX64\n"
3370 "%sLSTAR =%016RX64\n"
3371 "%sSFMASK =%016RX64\n"
3372 "%sKERNELGSBASE =%016RX64\n",
3373 pszPrefix, pCtx->msrEFER,
3374 pszPrefix, pCtx->msrPAT,
3375 pszPrefix, pCtx->msrSTAR,
3376 pszPrefix, pCtx->msrCSTAR,
3377 pszPrefix, pCtx->msrLSTAR,
3378 pszPrefix, pCtx->msrSFMASK,
3379 pszPrefix, pCtx->msrKERNELGSBASE);
3380
3381 if (CPUMIsGuestInPAEModeEx(pCtx))
3382 for (unsigned i = 0; i < RT_ELEMENTS(pCtx->aPaePdpes); i++)
3383 pHlp->pfnPrintf(pHlp, "%sPAE PDPTE %u =%016RX64\n", pszPrefix, i, pCtx->aPaePdpes[i]);
3384 break;
3385 }
3386}
3387
3388
3389/**
3390 * Display all cpu states and any other cpum info.
3391 *
3392 * @param pVM The cross context VM structure.
3393 * @param pHlp The info helper functions.
3394 * @param pszArgs Arguments, ignored.
3395 */
3396static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3397{
3398 cpumR3InfoGuest(pVM, pHlp, pszArgs);
3399 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
3400 cpumR3InfoGuestHwvirt(pVM, pHlp, pszArgs);
3401 cpumR3InfoHyper(pVM, pHlp, pszArgs);
3402 cpumR3InfoHost(pVM, pHlp, pszArgs);
3403}
3404
3405
3406/**
3407 * Parses the info argument.
3408 *
3409 * The argument starts with 'verbose', 'terse' or 'default' and then
3410 * continues with the comment string.
3411 *
3412 * @param pszArgs The pointer to the argument string.
3413 * @param penmType Where to store the dump type request.
3414 * @param ppszComment Where to store the pointer to the comment string.
3415 */
3416static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
3417{
3418 if (!pszArgs)
3419 {
3420 *penmType = CPUMDUMPTYPE_DEFAULT;
3421 *ppszComment = "";
3422 }
3423 else
3424 {
3425 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
3426 {
3427 pszArgs += 7;
3428 *penmType = CPUMDUMPTYPE_VERBOSE;
3429 }
3430 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
3431 {
3432 pszArgs += 5;
3433 *penmType = CPUMDUMPTYPE_TERSE;
3434 }
3435 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
3436 {
3437 pszArgs += 7;
3438 *penmType = CPUMDUMPTYPE_DEFAULT;
3439 }
3440 else
3441 *penmType = CPUMDUMPTYPE_DEFAULT;
3442 *ppszComment = RTStrStripL(pszArgs);
3443 }
3444}
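/*
 * Editor's note: these arguments typically arrive from the debugger console,
 * e.g. something like "info cpumguest verbose <comment>" (illustrative),
 * where everything after the dump-type keyword ends up as the comment string.
 */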
3445
3446
3447/**
3448 * Display the guest cpu state.
3449 *
3450 * @param pVM The cross context VM structure.
3451 * @param pHlp The info helper functions.
3452 * @param pszArgs Arguments.
3453 */
3454static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3455{
3456 CPUMDUMPTYPE enmType;
3457 const char *pszComment;
3458 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3459
3460 PVMCPU pVCpu = VMMGetCpu(pVM);
3461 if (!pVCpu)
3462 pVCpu = pVM->apCpusR3[0];
3463
3464 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
3465
3466 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3467 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
3468}
3469
3470
3471/**
3472 * Displays an SVM VMCB control area.
3473 *
3474 * @param pHlp The info helper functions.
3475 * @param pVmcbCtrl Pointer to a SVM VMCB controls area.
3476 * @param pszPrefix Caller specified string prefix.
3477 */
3478static void cpumR3InfoSvmVmcbCtrl(PCDBGFINFOHLP pHlp, PCSVMVMCBCTRL pVmcbCtrl, const char *pszPrefix)
3479{
3480 AssertReturnVoid(pHlp);
3481 AssertReturnVoid(pVmcbCtrl);
3482
3483 pHlp->pfnPrintf(pHlp, "%sCRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdCRx);
3484 pHlp->pfnPrintf(pHlp, "%sCRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrCRx);
3485 pHlp->pfnPrintf(pHlp, "%sDRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdDRx);
3486 pHlp->pfnPrintf(pHlp, "%sDRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrDRx);
3487 pHlp->pfnPrintf(pHlp, "%sException intercepts = %#RX32\n", pszPrefix, pVmcbCtrl->u32InterceptXcpt);
3488 pHlp->pfnPrintf(pHlp, "%sControl intercepts = %#RX64\n", pszPrefix, pVmcbCtrl->u64InterceptCtrl);
3489 pHlp->pfnPrintf(pHlp, "%sPause-filter threshold = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterThreshold);
3490 pHlp->pfnPrintf(pHlp, "%sPause-filter count = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterCount);
3491 pHlp->pfnPrintf(pHlp, "%sIOPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64IOPMPhysAddr);
3492 pHlp->pfnPrintf(pHlp, "%sMSRPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64MSRPMPhysAddr);
3493 pHlp->pfnPrintf(pHlp, "%sTSC offset = %#RX64\n", pszPrefix, pVmcbCtrl->u64TSCOffset);
3494 pHlp->pfnPrintf(pHlp, "%sTLB Control\n", pszPrefix);
3495 pHlp->pfnPrintf(pHlp, " %sASID = %#RX32\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u32ASID);
3496 pHlp->pfnPrintf(pHlp, " %sTLB-flush type = %u\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u8TLBFlush);
3497 pHlp->pfnPrintf(pHlp, "%sInterrupt Control\n", pszPrefix);
3498 pHlp->pfnPrintf(pHlp, " %sVTPR = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VTPR, pVmcbCtrl->IntCtrl.n.u8VTPR);
3499 pHlp->pfnPrintf(pHlp, " %sVIRQ (Pending) = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIrqPending);
3500 pHlp->pfnPrintf(pHlp, " %sVINTR vector = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VIntrVector);
3501 pHlp->pfnPrintf(pHlp, " %sVGIF = %u\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGif);
3502 pHlp->pfnPrintf(pHlp, " %sVINTR priority = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u4VIntrPrio);
3503 pHlp->pfnPrintf(pHlp, " %sIgnore TPR = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1IgnoreTPR);
3504 pHlp->pfnPrintf(pHlp, " %sVINTR masking = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIntrMasking);
3505 pHlp->pfnPrintf(pHlp, " %sVGIF enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGifEnable);
3506 pHlp->pfnPrintf(pHlp, " %sAVIC enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1AvicEnable);
3507 pHlp->pfnPrintf(pHlp, "%sInterrupt Shadow\n", pszPrefix);
3508 pHlp->pfnPrintf(pHlp, " %sInterrupt shadow = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1IntShadow);
3509 pHlp->pfnPrintf(pHlp, " %sGuest-interrupt Mask = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1GuestIntMask);
3510 pHlp->pfnPrintf(pHlp, "%sExit Code = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitCode);
3511 pHlp->pfnPrintf(pHlp, "%sEXITINFO1 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo1);
3512 pHlp->pfnPrintf(pHlp, "%sEXITINFO2 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo2);
3513 pHlp->pfnPrintf(pHlp, "%sExit Interrupt Info\n", pszPrefix);
3514 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1Valid);
3515 pHlp->pfnPrintf(pHlp, " %sVector = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u8Vector, pVmcbCtrl->ExitIntInfo.n.u8Vector);
3516 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u3Type);
3517 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1ErrorCodeValid);
3518 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u32ErrorCode);
3519 pHlp->pfnPrintf(pHlp, "%sNested paging and SEV\n", pszPrefix);
3520 pHlp->pfnPrintf(pHlp, " %sNested paging = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1NestedPaging);
3521 pHlp->pfnPrintf(pHlp, " %sSEV (Secure Encrypted VM) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1Sev);
3522 pHlp->pfnPrintf(pHlp, " %sSEV-ES (Encrypted State) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1SevEs);
3523 pHlp->pfnPrintf(pHlp, "%sEvent Inject\n", pszPrefix);
3524 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1Valid);
3525 pHlp->pfnPrintf(pHlp, " %sVector = %#RX32 (%u)\n", pszPrefix, pVmcbCtrl->EventInject.n.u8Vector, pVmcbCtrl->EventInject.n.u8Vector);
3526 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->EventInject.n.u3Type);
3527 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1ErrorCodeValid);
3528 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->EventInject.n.u32ErrorCode);
3529 pHlp->pfnPrintf(pHlp, "%sNested-paging CR3 = %#RX64\n", pszPrefix, pVmcbCtrl->u64NestedPagingCR3);
3530 pHlp->pfnPrintf(pHlp, "%sLBR Virtualization\n", pszPrefix);
3531 pHlp->pfnPrintf(pHlp, " %sLBR virt = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1LbrVirt);
3532 pHlp->pfnPrintf(pHlp, " %sVirt. VMSAVE/VMLOAD = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1VirtVmsaveVmload);
3533 pHlp->pfnPrintf(pHlp, "%sVMCB Clean Bits = %#RX32\n", pszPrefix, pVmcbCtrl->u32VmcbCleanBits);
3534 pHlp->pfnPrintf(pHlp, "%sNext-RIP = %#RX64\n", pszPrefix, pVmcbCtrl->u64NextRIP);
3535 pHlp->pfnPrintf(pHlp, "%sInstruction bytes fetched = %u\n", pszPrefix, pVmcbCtrl->cbInstrFetched);
3536 pHlp->pfnPrintf(pHlp, "%sInstruction bytes = %.*Rhxs\n", pszPrefix, sizeof(pVmcbCtrl->abInstr), pVmcbCtrl->abInstr);
3537 pHlp->pfnPrintf(pHlp, "%sAVIC\n", pszPrefix);
3538 pHlp->pfnPrintf(pHlp, " %sBar addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBar.n.u40Addr);
3539 pHlp->pfnPrintf(pHlp, " %sBacking page addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBackingPagePtr.n.u40Addr);
3540 pHlp->pfnPrintf(pHlp, " %sLogical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicLogicalTablePtr.n.u40Addr);
3541 pHlp->pfnPrintf(pHlp, " %sPhysical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u40Addr);
3542 pHlp->pfnPrintf(pHlp, " %sLast guest core Id = %u\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u8LastGuestCoreId);
3543}
3544
3545
3546/**
3547 * Helper for dumping the SVM VMCB selector registers.
3548 *
3549 * @param pHlp The info helper functions.
3550 * @param pSel Pointer to the SVM selector register.
3551 * @param pszName Name of the selector.
3552 * @param pszPrefix Caller specified string prefix.
3553 */
3554DECLINLINE(void) cpumR3InfoSvmVmcbSelReg(PCDBGFINFOHLP pHlp, PCSVMSELREG pSel, const char *pszName, const char *pszPrefix)
3555{
3556 /* The string width of 4 used below is to handle 'LDTR'. Change later if longer register names are used. */
3557 pHlp->pfnPrintf(pHlp, "%s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", pszPrefix,
3558 pszName, pSel->u16Sel, pSel->u64Base, pSel->u32Limit, pSel->u16Attr);
3559}
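/*
 * Editor's note: with an empty prefix the helper above emits lines of the
 * shape (values illustrative):
 *     CS   = {0010 base=0000000000000000 limit=ffffffff flags=029b}
 */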
3560
3561
3562/**
3563 * Helper for dumping the SVM VMCB GDTR/IDTR registers.
3564 *
3565 * @param pHlp The info helper functions.
3566 * @param pXdtr Pointer to the descriptor table register.
3567 * @param pszName Name of the descriptor table register.
3568 * @param pszPrefix Caller specified string prefix.
3569 */
3570DECLINLINE(void) cpumR3InfoSvmVmcbXdtr(PCDBGFINFOHLP pHlp, PCSVMXDTR pXdtr, const char *pszName, const char *pszPrefix)
3571{
3572 /* The string width of 4 used below is to cover 'GDTR', 'IDTR'. Change later if longer register names are used. */
3573 pHlp->pfnPrintf(pHlp, "%s%-4s = %016RX64:%04x\n", pszPrefix, pszName, pXdtr->u64Base, pXdtr->u32Limit);
3574}
3575
3576
3577/**
3578 * Displays an SVM VMCB state-save area.
3579 *
3580 * @param pHlp The info helper functions.
3581 * @param pVmcbStateSave Pointer to a SVM VMCB state-save area.
3582 * @param pszPrefix Caller specified string prefix.
3583 */
3584static void cpumR3InfoSvmVmcbStateSave(PCDBGFINFOHLP pHlp, PCSVMVMCBSTATESAVE pVmcbStateSave, const char *pszPrefix)
3585{
3586 AssertReturnVoid(pHlp);
3587 AssertReturnVoid(pVmcbStateSave);
3588
3589 char szEFlags[80];
3590 cpumR3InfoFormatFlags(&szEFlags[0], pVmcbStateSave->u64RFlags);
3591
3592 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->CS, "CS", pszPrefix);
3593 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->SS, "SS", pszPrefix);
3594 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->ES, "ES", pszPrefix);
3595 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->DS, "DS", pszPrefix);
3596 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->FS, "FS", pszPrefix);
3597 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->GS, "GS", pszPrefix);
3598 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->LDTR, "LDTR", pszPrefix);
3599 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->TR, "TR", pszPrefix);
3600 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->GDTR, "GDTR", pszPrefix);
3601 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->IDTR, "IDTR", pszPrefix);
3602 pHlp->pfnPrintf(pHlp, "%sCPL = %u\n", pszPrefix, pVmcbStateSave->u8CPL);
3603 pHlp->pfnPrintf(pHlp, "%sEFER = %#RX64\n", pszPrefix, pVmcbStateSave->u64EFER);
3604 pHlp->pfnPrintf(pHlp, "%sCR4 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR4);
3605 pHlp->pfnPrintf(pHlp, "%sCR3 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR3);
3606 pHlp->pfnPrintf(pHlp, "%sCR0 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR0);
3607 pHlp->pfnPrintf(pHlp, "%sDR7 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR7);
3608 pHlp->pfnPrintf(pHlp, "%sDR6 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR6);
3609 pHlp->pfnPrintf(pHlp, "%sRFLAGS = %#RX64 %31s\n", pszPrefix, pVmcbStateSave->u64RFlags, szEFlags);
3610 pHlp->pfnPrintf(pHlp, "%sRIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RIP);
3611 pHlp->pfnPrintf(pHlp, "%sRSP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RSP);
3612 pHlp->pfnPrintf(pHlp, "%sRAX = %#RX64\n", pszPrefix, pVmcbStateSave->u64RAX);
3613 pHlp->pfnPrintf(pHlp, "%sSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64STAR);
3614 pHlp->pfnPrintf(pHlp, "%sLSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64LSTAR);
3615 pHlp->pfnPrintf(pHlp, "%sCSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64CSTAR);
3616 pHlp->pfnPrintf(pHlp, "%sSFMASK = %#RX64\n", pszPrefix, pVmcbStateSave->u64SFMASK);
3617 pHlp->pfnPrintf(pHlp, "%sKERNELGSBASE = %#RX64\n", pszPrefix, pVmcbStateSave->u64KernelGSBase);
3618 pHlp->pfnPrintf(pHlp, "%sSysEnter CS = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterCS);
3619 pHlp->pfnPrintf(pHlp, "%sSysEnter EIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterEIP);
3620 pHlp->pfnPrintf(pHlp, "%sSysEnter ESP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterESP);
3621 pHlp->pfnPrintf(pHlp, "%sCR2 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR2);
3622 pHlp->pfnPrintf(pHlp, "%sPAT = %#RX64\n", pszPrefix, pVmcbStateSave->u64PAT);
3623 pHlp->pfnPrintf(pHlp, "%sDBGCTL = %#RX64\n", pszPrefix, pVmcbStateSave->u64DBGCTL);
3624 pHlp->pfnPrintf(pHlp, "%sBR_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_FROM);
3625 pHlp->pfnPrintf(pHlp, "%sBR_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_TO);
3626 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPFROM);
3627 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPTO);
3628}
3629
3630
3631/**
3632 * Displays a virtual-VMCS.
3633 *
3634 * @param pVCpu The cross context virtual CPU structure.
3635 * @param pHlp The info helper functions.
3636 * @param pVmcs Pointer to a virtual VMCS.
3637 * @param pszPrefix Caller specified string prefix.
3638 */
3639static void cpumR3InfoVmxVmcs(PVMCPU pVCpu, PCDBGFINFOHLP pHlp, PCVMXVVMCS pVmcs, const char *pszPrefix)
3640{
3641 AssertReturnVoid(pHlp);
3642 AssertReturnVoid(pVmcs);
3643
3644 /* The string width of -4 used in the macros below is to cover 'LDTR', 'GDTR', 'IDTR'. */
3645#define CPUMVMX_DUMP_HOST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3646 do { \
3647 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64}\n", \
3648 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Host##a_Seg##Base.u); \
3649 } while (0)
3650
3651#define CPUMVMX_DUMP_HOST_FS_GS_TR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3652 do { \
3653 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64}\n", \
3654 (a_pszPrefix), (a_SegName), (a_pVmcs)->Host##a_Seg, (a_pVmcs)->u64Host##a_Seg##Base.u); \
3655 } while (0)
3656
3657#define CPUMVMX_DUMP_GUEST_SEGREG(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3658 do { \
3659 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", \
3660 (a_pszPrefix), (a_SegName), (a_pVmcs)->Guest##a_Seg, (a_pVmcs)->u64Guest##a_Seg##Base.u, \
3661 (a_pVmcs)->u32Guest##a_Seg##Limit, (a_pVmcs)->u32Guest##a_Seg##Attr); \
3662 } while (0)
3663
3664#define CPUMVMX_DUMP_GUEST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3665 do { \
3666 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64 limit=%08x}\n", \
3667 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Guest##a_Seg##Base.u, (a_pVmcs)->u32Guest##a_Seg##Limit); \
3668 } while (0)
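/* For illustration: the macros above paste a_Seg into the VMCS field names,
   so CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Cs, "CS", pszPrefix) reads
   pVmcs->GuestCs, pVmcs->u64GuestCsBase.u, pVmcs->u32GuestCsLimit and
   pVmcs->u32GuestCsAttr in a single pfnPrintf call. */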
3669
3670 /* Header. */
3671 {
3672 pHlp->pfnPrintf(pHlp, "%sHeader:\n", pszPrefix);
3673 pHlp->pfnPrintf(pHlp, " %sVMCS revision id = %#RX32\n", pszPrefix, pVmcs->u32VmcsRevId);
3674 pHlp->pfnPrintf(pHlp, " %sVMX-abort id = %#RX32 (%s)\n", pszPrefix, pVmcs->enmVmxAbort, VMXGetAbortDesc(pVmcs->enmVmxAbort));
3675 pHlp->pfnPrintf(pHlp, " %sVMCS state = %#x (%s)\n", pszPrefix, pVmcs->fVmcsState, VMXGetVmcsStateDesc(pVmcs->fVmcsState));
3676 }
3677
3678 /* Control fields. */
3679 {
3680 /* 16-bit. */
3681 pHlp->pfnPrintf(pHlp, "%sControl:\n", pszPrefix);
3682 pHlp->pfnPrintf(pHlp, " %sVPID = %#RX16\n", pszPrefix, pVmcs->u16Vpid);
3683 pHlp->pfnPrintf(pHlp, " %sPosted intr notify vector = %#RX16\n", pszPrefix, pVmcs->u16PostIntNotifyVector);
3684 pHlp->pfnPrintf(pHlp, " %sEPTP index = %#RX16\n", pszPrefix, pVmcs->u16EptpIndex);
3685
3686 /* 32-bit. */
3687 pHlp->pfnPrintf(pHlp, " %sPin ctls = %#RX32\n", pszPrefix, pVmcs->u32PinCtls);
3688 pHlp->pfnPrintf(pHlp, " %sProcessor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls);
3689 pHlp->pfnPrintf(pHlp, " %sSecondary processor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls2);
3690 pHlp->pfnPrintf(pHlp, " %sVM-exit ctls = %#RX32\n", pszPrefix, pVmcs->u32ExitCtls);
3691 pHlp->pfnPrintf(pHlp, " %sVM-entry ctls = %#RX32\n", pszPrefix, pVmcs->u32EntryCtls);
3692 pHlp->pfnPrintf(pHlp, " %sException bitmap = %#RX32\n", pszPrefix, pVmcs->u32XcptBitmap);
3693 pHlp->pfnPrintf(pHlp, " %sPage-fault mask = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMask);
3694 pHlp->pfnPrintf(pHlp, " %sPage-fault match = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMatch);
3695 pHlp->pfnPrintf(pHlp, " %sCR3-target count = %RU32\n", pszPrefix, pVmcs->u32Cr3TargetCount);
3696 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrStoreCount);
3697 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrLoadCount);
3698 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load count = %RU32\n", pszPrefix, pVmcs->u32EntryMsrLoadCount);
3699 pHlp->pfnPrintf(pHlp, " %sVM-entry interruption info = %#RX32\n", pszPrefix, pVmcs->u32EntryIntInfo);
3700 {
3701 uint32_t const fInfo = pVmcs->u32EntryIntInfo;
3702 uint8_t const uType = VMX_ENTRY_INT_INFO_TYPE(fInfo);
3703 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_VALID(fInfo));
3704 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetEntryIntInfoTypeDesc(uType));
3705 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_ENTRY_INT_INFO_VECTOR(fInfo));
3706 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3707 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3708 }
3709 pHlp->pfnPrintf(pHlp, " %sVM-entry xcpt error-code = %#RX32\n", pszPrefix, pVmcs->u32EntryXcptErrCode);
3710 pHlp->pfnPrintf(pHlp, " %sVM-entry instr length = %u byte(s)\n", pszPrefix, pVmcs->u32EntryInstrLen);
3711 pHlp->pfnPrintf(pHlp, " %sTPR threshold = %#RX32\n", pszPrefix, pVmcs->u32TprThreshold);
3712 pHlp->pfnPrintf(pHlp, " %sPLE gap = %#RX32\n", pszPrefix, pVmcs->u32PleGap);
3713 pHlp->pfnPrintf(pHlp, " %sPLE window = %#RX32\n", pszPrefix, pVmcs->u32PleWindow);
3714
3715 /* 64-bit. */
3716 pHlp->pfnPrintf(pHlp, " %sIO-bitmap A addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapA.u);
3717 pHlp->pfnPrintf(pHlp, " %sIO-bitmap B addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapB.u);
3718 pHlp->pfnPrintf(pHlp, " %sMSR-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrMsrBitmap.u);
3719 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrStore.u);
3720 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrLoad.u);
3721 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEntryMsrLoad.u);
3722 pHlp->pfnPrintf(pHlp, " %sExecutive VMCS ptr = %#RX64\n", pszPrefix, pVmcs->u64ExecVmcsPtr.u);
3723 pHlp->pfnPrintf(pHlp, " %sPML addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPml.u);
3724 pHlp->pfnPrintf(pHlp, " %sTSC offset = %#RX64\n", pszPrefix, pVmcs->u64TscOffset.u);
3725 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVirtApic.u);
3726 pHlp->pfnPrintf(pHlp, " %sAPIC-access addr = %#RX64\n", pszPrefix, pVmcs->u64AddrApicAccess.u);
3727 pHlp->pfnPrintf(pHlp, " %sPosted-intr desc addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPostedIntDesc.u);
3728 pHlp->pfnPrintf(pHlp, " %sVM-functions control = %#RX64\n", pszPrefix, pVmcs->u64VmFuncCtls.u);
3729 pHlp->pfnPrintf(pHlp, " %sEPTP ptr = %#RX64\n", pszPrefix, pVmcs->u64EptPtr.u);
3730 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 0 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap0.u);
3731 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 1 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap1.u);
3732 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 2 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap2.u);
3733 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 3 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap3.u);
3734 pHlp->pfnPrintf(pHlp, " %sEPTP-list addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEptpList.u);
3735 pHlp->pfnPrintf(pHlp, " %sVMREAD-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmreadBitmap.u);
3736 pHlp->pfnPrintf(pHlp, " %sVMWRITE-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmwriteBitmap.u);
3737 pHlp->pfnPrintf(pHlp, " %sVirt-Xcpt info addr = %#RX64\n", pszPrefix, pVmcs->u64AddrXcptVeInfo.u);
3738 pHlp->pfnPrintf(pHlp, " %sXSS-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64XssExitBitmap.u);
3739 pHlp->pfnPrintf(pHlp, " %sENCLS-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64EnclsExitBitmap.u);
3740 pHlp->pfnPrintf(pHlp, " %sSPP-table ptr = %#RX64\n", pszPrefix, pVmcs->u64SppTablePtr.u);
3741 pHlp->pfnPrintf(pHlp, " %sTSC multiplier = %#RX64\n", pszPrefix, pVmcs->u64TscMultiplier.u);
3742 pHlp->pfnPrintf(pHlp, " %sTertiary processor ctls = %#RX64\n", pszPrefix, pVmcs->u64ProcCtls3.u);
3743 pHlp->pfnPrintf(pHlp, " %sENCLV-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64EnclvExitBitmap.u);
3744
3745 /* Natural width. */
3746 pHlp->pfnPrintf(pHlp, " %sCR0 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr0Mask.u);
3747 pHlp->pfnPrintf(pHlp, " %sCR4 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr4Mask.u);
3748 pHlp->pfnPrintf(pHlp, " %sCR0 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr0ReadShadow.u);
3749 pHlp->pfnPrintf(pHlp, " %sCR4 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr4ReadShadow.u);
3750 pHlp->pfnPrintf(pHlp, " %sCR3-target 0 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target0.u);
3751 pHlp->pfnPrintf(pHlp, " %sCR3-target 1 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target1.u);
3752 pHlp->pfnPrintf(pHlp, " %sCR3-target 2 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target2.u);
3753 pHlp->pfnPrintf(pHlp, " %sCR3-target 3 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target3.u);
3754 }
3755
3756 /* Guest state. */
3757 {
3758 char szEFlags[80];
3759 cpumR3InfoFormatFlags(&szEFlags[0], pVmcs->u64GuestRFlags.u);
3760 pHlp->pfnPrintf(pHlp, "%sGuest state:\n", pszPrefix);
3761
3762 /* 16-bit. */
3763 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Cs, "CS", pszPrefix);
3764 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ss, "SS", pszPrefix);
3765 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Es, "ES", pszPrefix);
3766 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ds, "DS", pszPrefix);
3767 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Fs, "FS", pszPrefix);
3768 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Gs, "GS", pszPrefix);
3769 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ldtr, "LDTR", pszPrefix);
3770 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Tr, "TR", pszPrefix);
3771 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Gdtr, "GDTR", pszPrefix);
3772 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Idtr, "IDTR", pszPrefix);
3773 pHlp->pfnPrintf(pHlp, " %sInterrupt status = %#RX16\n", pszPrefix, pVmcs->u16GuestIntStatus);
3774 pHlp->pfnPrintf(pHlp, " %sPML index = %#RX16\n", pszPrefix, pVmcs->u16PmlIndex);
3775
3776 /* 32-bit. */
3777 pHlp->pfnPrintf(pHlp, " %sInterruptibility state = %#RX32\n", pszPrefix, pVmcs->u32GuestIntrState);
3778 pHlp->pfnPrintf(pHlp, " %sActivity state = %#RX32\n", pszPrefix, pVmcs->u32GuestActivityState);
3779 pHlp->pfnPrintf(pHlp, " %sSMBASE = %#RX32\n", pszPrefix, pVmcs->u32GuestSmBase);
3780 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32GuestSysenterCS);
3781 pHlp->pfnPrintf(pHlp, " %sVMX-preemption timer value = %#RX32\n", pszPrefix, pVmcs->u32PreemptTimer);
3782
3783 /* 64-bit. */
3784 pHlp->pfnPrintf(pHlp, " %sVMCS link ptr = %#RX64\n", pszPrefix, pVmcs->u64VmcsLinkPtr.u);
3785 pHlp->pfnPrintf(pHlp, " %sDBGCTL = %#RX64\n", pszPrefix, pVmcs->u64GuestDebugCtlMsr.u);
3786 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64GuestPatMsr.u);
3787 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64GuestEferMsr.u);
3788 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64GuestPerfGlobalCtlMsr.u);
3789 pHlp->pfnPrintf(pHlp, " %sPDPTE 0 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte0.u);
3790 pHlp->pfnPrintf(pHlp, " %sPDPTE 1 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte1.u);
3791 pHlp->pfnPrintf(pHlp, " %sPDPTE 2 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte2.u);
3792 pHlp->pfnPrintf(pHlp, " %sPDPTE 3 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte3.u);
3793 pHlp->pfnPrintf(pHlp, " %sBNDCFGS = %#RX64\n", pszPrefix, pVmcs->u64GuestBndcfgsMsr.u);
3794 pHlp->pfnPrintf(pHlp, " %sRTIT_CTL = %#RX64\n", pszPrefix, pVmcs->u64GuestRtitCtlMsr.u);
3795 pHlp->pfnPrintf(pHlp, " %sPKRS = %#RX64\n", pszPrefix, pVmcs->u64GuestPkrsMsr.u);
3796
3797 /* Natural width. */
3798 pHlp->pfnPrintf(pHlp, " %sCR0 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr0.u);
3799 pHlp->pfnPrintf(pHlp, " %sCR3 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr3.u);
3800 pHlp->pfnPrintf(pHlp, " %sCR4 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr4.u);
3801 pHlp->pfnPrintf(pHlp, " %sDR7 = %#RX64\n", pszPrefix, pVmcs->u64GuestDr7.u);
3802 pHlp->pfnPrintf(pHlp, " %sRSP = %#RX64\n", pszPrefix, pVmcs->u64GuestRsp.u);
3803 pHlp->pfnPrintf(pHlp, " %sRIP = %#RX64\n", pszPrefix, pVmcs->u64GuestRip.u);
3804 pHlp->pfnPrintf(pHlp, " %sRFLAGS = %#RX64 %31s\n",pszPrefix, pVmcs->u64GuestRFlags.u, szEFlags);
3805 pHlp->pfnPrintf(pHlp, " %sPending debug xcpts = %#RX64\n", pszPrefix, pVmcs->u64GuestPendingDbgXcpts.u);
3806 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEsp.u);
3807 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEip.u);
3808 pHlp->pfnPrintf(pHlp, " %sS_CET = %#RX64\n", pszPrefix, pVmcs->u64GuestSCetMsr.u);
3809 pHlp->pfnPrintf(pHlp, " %sSSP = %#RX64\n", pszPrefix, pVmcs->u64GuestSsp.u);
3810 pHlp->pfnPrintf(pHlp, " %sINTERRUPT_SSP_TABLE_ADDR = %#RX64\n", pszPrefix, pVmcs->u64GuestIntrSspTableAddrMsr.u);
3811 }
3812
3813 /* Host state. */
3814 {
3815 pHlp->pfnPrintf(pHlp, "%sHost state:\n", pszPrefix);
3816
3817 /* 16-bit. */
3818 pHlp->pfnPrintf(pHlp, " %sCS = %#RX16\n", pszPrefix, pVmcs->HostCs);
3819 pHlp->pfnPrintf(pHlp, " %sSS = %#RX16\n", pszPrefix, pVmcs->HostSs);
3820 pHlp->pfnPrintf(pHlp, " %sDS = %#RX16\n", pszPrefix, pVmcs->HostDs);
3821 pHlp->pfnPrintf(pHlp, " %sES = %#RX16\n", pszPrefix, pVmcs->HostEs);
3822 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Fs, "FS", pszPrefix);
3823 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Gs, "GS", pszPrefix);
3824 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Tr, "TR", pszPrefix);
3825 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Gdtr, "GDTR", pszPrefix);
3826 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Idtr, "IDTR", pszPrefix);
3827
3828 /* 32-bit. */
3829 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32HostSysenterCs);
3830
3831 /* 64-bit. */
3832 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64HostEferMsr.u);
3833 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64HostPatMsr.u);
3834 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64HostPerfGlobalCtlMsr.u);
3835 pHlp->pfnPrintf(pHlp, " %sPKRS = %#RX64\n", pszPrefix, pVmcs->u64HostPkrsMsr.u);
3836
3837 /* Natural width. */
3838 pHlp->pfnPrintf(pHlp, " %sCR0 = %#RX64\n", pszPrefix, pVmcs->u64HostCr0.u);
3839 pHlp->pfnPrintf(pHlp, " %sCR3 = %#RX64\n", pszPrefix, pVmcs->u64HostCr3.u);
3840 pHlp->pfnPrintf(pHlp, " %sCR4 = %#RX64\n", pszPrefix, pVmcs->u64HostCr4.u);
3841 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEsp.u);
3842 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEip.u);
3843 pHlp->pfnPrintf(pHlp, " %sRSP = %#RX64\n", pszPrefix, pVmcs->u64HostRsp.u);
3844 pHlp->pfnPrintf(pHlp, " %sRIP = %#RX64\n", pszPrefix, pVmcs->u64HostRip.u);
3845 pHlp->pfnPrintf(pHlp, " %sS_CET = %#RX64\n", pszPrefix, pVmcs->u64HostSCetMsr.u);
3846 pHlp->pfnPrintf(pHlp, " %sSSP = %#RX64\n", pszPrefix, pVmcs->u64HostSsp.u);
3847 pHlp->pfnPrintf(pHlp, " %sINTERRUPT_SSP_TABLE_ADDR = %#RX64\n", pszPrefix, pVmcs->u64HostIntrSspTableAddrMsr.u);
3848
3849 }
3850
3851 /* Read-only fields. */
3852 {
3853 pHlp->pfnPrintf(pHlp, "%sRead-only data fields:\n", pszPrefix);
3854
3855 /* 16-bit (none currently). */
3856
3857 /* 32-bit. */
3858 pHlp->pfnPrintf(pHlp, " %sExit reason = %u (%s)\n", pszPrefix, pVmcs->u32RoExitReason, HMGetVmxExitName(pVmcs->u32RoExitReason));
3859 pHlp->pfnPrintf(pHlp, " %sExit qualification = %#RX64\n", pszPrefix, pVmcs->u64RoExitQual.u);
3860 pHlp->pfnPrintf(pHlp, " %sVM-instruction error = %#RX32\n", pszPrefix, pVmcs->u32RoVmInstrError);
3861 pHlp->pfnPrintf(pHlp, " %sVM-exit intr info = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntInfo);
3862 {
3863 uint32_t const fInfo = pVmcs->u32RoExitIntInfo;
3864 uint8_t const uType = VMX_EXIT_INT_INFO_TYPE(fInfo);
3865 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_VALID(fInfo));
3866 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetExitIntInfoTypeDesc(uType));
3867 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_EXIT_INT_INFO_VECTOR(fInfo));
3868 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3869 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3870 }
3871 pHlp->pfnPrintf(pHlp, " %sVM-exit intr error-code = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntErrCode);
3872 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring info = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringInfo);
3873 {
3874 uint32_t const fInfo = pVmcs->u32RoIdtVectoringInfo;
3875 uint8_t const uType = VMX_IDT_VECTORING_INFO_TYPE(fInfo);
3876 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_VALID(fInfo));
3877 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetIdtVectoringInfoTypeDesc(uType));
3878 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_IDT_VECTORING_INFO_VECTOR(fInfo));
3879 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_ERROR_CODE_VALID(fInfo));
3880 }
3881 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring error-code = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringErrCode);
3882 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction length = %u byte(s)\n", pszPrefix, pVmcs->u32RoExitInstrLen);
3883 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction info = %#RX32\n", pszPrefix, pVmcs->u32RoExitInstrInfo);
3884
3885 /* 64-bit. */
3886 pHlp->pfnPrintf(pHlp, " %sGuest-physical addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestPhysAddr.u);
3887
3888 /* Natural width. */
3889 pHlp->pfnPrintf(pHlp, " %sI/O RCX = %#RX64\n", pszPrefix, pVmcs->u64RoIoRcx.u);
3890 pHlp->pfnPrintf(pHlp, " %sI/O RSI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRsi.u);
3891 pHlp->pfnPrintf(pHlp, " %sI/O RDI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRdi.u);
3892 pHlp->pfnPrintf(pHlp, " %sI/O RIP = %#RX64\n", pszPrefix, pVmcs->u64RoIoRip.u);
3893 pHlp->pfnPrintf(pHlp, " %sGuest-linear addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestLinearAddr.u);
3894 }
3895
3896#ifdef DEBUG_ramshankar
3897 if (pVmcs->u32ProcCtls & VMX_PROC_CTLS_USE_TPR_SHADOW)
3898 {
3899 void *pvPage = RTMemTmpAllocZ(VMX_V_VIRT_APIC_SIZE);
3900 Assert(pvPage);
3901 RTGCPHYS const GCPhysVirtApic = pVmcs->u64AddrVirtApic.u;
3902 int rc = PGMPhysSimpleReadGCPhys(pVCpu->CTX_SUFF(pVM), pvPage, GCPhysVirtApic, VMX_V_VIRT_APIC_SIZE);
3903 if (RT_SUCCESS(rc))
3904 {
3905 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC page\n", pszPrefix);
3906 pHlp->pfnPrintf(pHlp, "%.*Rhxs\n", VMX_V_VIRT_APIC_SIZE, pvPage);
3907 pHlp->pfnPrintf(pHlp, "\n");
3908 }
3909 RTMemTmpFree(pvPage);
3910 }
3911#else
3912 NOREF(pVCpu);
3913#endif
3914
3915#undef CPUMVMX_DUMP_HOST_XDTR
3916#undef CPUMVMX_DUMP_HOST_FS_GS_TR
3917#undef CPUMVMX_DUMP_GUEST_SEGREG
3918#undef CPUMVMX_DUMP_GUEST_XDTR
3919}
3920
3921
3922/**
3923 * Display the guest's hardware-virtualization cpu state.
3924 *
3925 * @param pVM The cross context VM structure.
3926 * @param pHlp The info helper functions.
3927 * @param pszArgs Arguments, ignored.
3928 */
3929static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3930{
3931 RT_NOREF(pszArgs);
3932
3933 PVMCPU pVCpu = VMMGetCpu(pVM);
3934 if (!pVCpu)
3935 pVCpu = pVM->apCpusR3[0];
3936
3937 PCCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3938 bool const fSvm = pVM->cpum.s.GuestFeatures.fSvm;
3939 bool const fVmx = pVM->cpum.s.GuestFeatures.fVmx;
3940
3941 pHlp->pfnPrintf(pHlp, "VCPU[%u] hardware virtualization state:\n", pVCpu->idCpu);
3942 pHlp->pfnPrintf(pHlp, "fLocalForcedActions = %#RX32\n", pCtx->hwvirt.fLocalForcedActions);
3943 pHlp->pfnPrintf(pHlp, "In nested-guest hwvirt mode = %RTbool\n", CPUMIsGuestInNestedHwvirtMode(pCtx));
3944
3945 if (fSvm)
3946 {
3947 pHlp->pfnPrintf(pHlp, "SVM hwvirt state:\n");
3948 pHlp->pfnPrintf(pHlp, " fGif = %RTbool\n", pCtx->hwvirt.fGif);
3949
3950 char szEFlags[80];
3951 cpumR3InfoFormatFlags(&szEFlags[0], pCtx->hwvirt.svm.HostState.rflags.u);
3952 pHlp->pfnPrintf(pHlp, " uMsrHSavePa = %#RX64\n", pCtx->hwvirt.svm.uMsrHSavePa);
3953 pHlp->pfnPrintf(pHlp, " GCPhysVmcb = %#RGp\n", pCtx->hwvirt.svm.GCPhysVmcb);
3954 pHlp->pfnPrintf(pHlp, " VmcbCtrl:\n");
3955 cpumR3InfoSvmVmcbCtrl(pHlp, &pCtx->hwvirt.svm.Vmcb.ctrl, " " /* pszPrefix */);
3956 pHlp->pfnPrintf(pHlp, " VmcbStateSave:\n");
3957 cpumR3InfoSvmVmcbStateSave(pHlp, &pCtx->hwvirt.svm.Vmcb.guest, " " /* pszPrefix */);
3958 pHlp->pfnPrintf(pHlp, " HostState:\n");
3959 pHlp->pfnPrintf(pHlp, " uEferMsr = %#RX64\n", pCtx->hwvirt.svm.HostState.uEferMsr);
3960 pHlp->pfnPrintf(pHlp, " uCr0 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr0);
3961 pHlp->pfnPrintf(pHlp, " uCr4 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr4);
3962 pHlp->pfnPrintf(pHlp, " uCr3 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr3);
3963 pHlp->pfnPrintf(pHlp, " uRip = %#RX64\n", pCtx->hwvirt.svm.HostState.uRip);
3964 pHlp->pfnPrintf(pHlp, " uRsp = %#RX64\n", pCtx->hwvirt.svm.HostState.uRsp);
3965 pHlp->pfnPrintf(pHlp, " uRax = %#RX64\n", pCtx->hwvirt.svm.HostState.uRax);
3966 pHlp->pfnPrintf(pHlp, " rflags = %#RX64 %31s\n", pCtx->hwvirt.svm.HostState.rflags.u64, szEFlags);
3967 PCCPUMSELREG pSelEs = &pCtx->hwvirt.svm.HostState.es;
3968 pHlp->pfnPrintf(pHlp, " es = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3969 pSelEs->Sel, pSelEs->u64Base, pSelEs->u32Limit, pSelEs->Attr.u);
3970 PCCPUMSELREG pSelCs = &pCtx->hwvirt.svm.HostState.cs;
3971 pHlp->pfnPrintf(pHlp, " cs = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3972 pSelCs->Sel, pSelCs->u64Base, pSelCs->u32Limit, pSelCs->Attr.u);
3973 PCCPUMSELREG pSelSs = &pCtx->hwvirt.svm.HostState.ss;
3974 pHlp->pfnPrintf(pHlp, " ss = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3975 pSelSs->Sel, pSelSs->u64Base, pSelSs->u32Limit, pSelSs->Attr.u);
3976 PCCPUMSELREG pSelDs = &pCtx->hwvirt.svm.HostState.ds;
3977 pHlp->pfnPrintf(pHlp, " ds = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3978 pSelDs->Sel, pSelDs->u64Base, pSelDs->u32Limit, pSelDs->Attr.u);
3979 pHlp->pfnPrintf(pHlp, " gdtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.gdtr.pGdt,
3980 pCtx->hwvirt.svm.HostState.gdtr.cbGdt);
3981 pHlp->pfnPrintf(pHlp, " idtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.idtr.pIdt,
3982 pCtx->hwvirt.svm.HostState.idtr.cbIdt);
3983 pHlp->pfnPrintf(pHlp, " cPauseFilter = %RU16\n", pCtx->hwvirt.svm.cPauseFilter);
3984 pHlp->pfnPrintf(pHlp, " cPauseFilterThreshold = %RU32\n", pCtx->hwvirt.svm.cPauseFilterThreshold);
3985 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %u\n", pCtx->hwvirt.svm.fInterceptEvents);
3986 }
3987 else if (fVmx)
3988 {
3989 pHlp->pfnPrintf(pHlp, "VMX hwvirt state:\n");
3990 pHlp->pfnPrintf(pHlp, " GCPhysVmxon = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmxon);
3991 pHlp->pfnPrintf(pHlp, " GCPhysVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmcs);
3992 pHlp->pfnPrintf(pHlp, " GCPhysShadowVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysShadowVmcs);
3993 pHlp->pfnPrintf(pHlp, " enmDiag = %u (%s)\n", pCtx->hwvirt.vmx.enmDiag, HMGetVmxDiagDesc(pCtx->hwvirt.vmx.enmDiag));
3994 pHlp->pfnPrintf(pHlp, " uDiagAux = %#RX64\n", pCtx->hwvirt.vmx.uDiagAux);
3995 pHlp->pfnPrintf(pHlp, " enmAbort = %u (%s)\n", pCtx->hwvirt.vmx.enmAbort, VMXGetAbortDesc(pCtx->hwvirt.vmx.enmAbort));
3996 pHlp->pfnPrintf(pHlp, " uAbortAux = %u (%#x)\n", pCtx->hwvirt.vmx.uAbortAux, pCtx->hwvirt.vmx.uAbortAux);
3997 pHlp->pfnPrintf(pHlp, " fInVmxRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxRootMode);
3998 pHlp->pfnPrintf(pHlp, " fInVmxNonRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxNonRootMode);
3999 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %RTbool\n", pCtx->hwvirt.vmx.fInterceptEvents);
4000 pHlp->pfnPrintf(pHlp, " fNmiUnblockingIret = %RTbool\n", pCtx->hwvirt.vmx.fNmiUnblockingIret);
4001 pHlp->pfnPrintf(pHlp, " uFirstPauseLoopTick = %RX64\n", pCtx->hwvirt.vmx.uFirstPauseLoopTick);
4002 pHlp->pfnPrintf(pHlp, " uPrevPauseTick = %RX64\n", pCtx->hwvirt.vmx.uPrevPauseTick);
4003 pHlp->pfnPrintf(pHlp, " uEntryTick = %RX64\n", pCtx->hwvirt.vmx.uEntryTick);
4004 pHlp->pfnPrintf(pHlp, " offVirtApicWrite = %#RX16\n", pCtx->hwvirt.vmx.offVirtApicWrite);
4005 pHlp->pfnPrintf(pHlp, " fVirtNmiBlocking = %RTbool\n", pCtx->hwvirt.vmx.fVirtNmiBlocking);
4006 pHlp->pfnPrintf(pHlp, " VMCS cache:\n");
4007 cpumR3InfoVmxVmcs(pVCpu, pHlp, &pCtx->hwvirt.vmx.Vmcs, " " /* pszPrefix */);
4008 }
4009 else
4010 pHlp->pfnPrintf(pHlp, "Hwvirt state disabled.\n");
4011
4012#undef CPUMHWVIRTDUMP_NONE
4013#undef CPUMHWVIRTDUMP_COMMON
4014#undef CPUMHWVIRTDUMP_SVM
4015#undef CPUMHWVIRTDUMP_VMX
4016#undef CPUMHWVIRTDUMP_LAST
4017#undef CPUMHWVIRTDUMP_ALL
4018}
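/* This dumper is a DBGF info handler; assuming it is registered under the
   "cpumhwvirt" info item (the registration is not part of this excerpt), it
   can be invoked from the host side with something like:
       VBoxManage debugvm "MyVM" info cpumhwvirt
   where the VM name is a placeholder. */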
4019
4020/**
4021 * Display the current guest instruction.
4022 *
4023 * @param pVM The cross context VM structure.
4024 * @param pHlp The info helper functions.
4025 * @param pszArgs Arguments, ignored.
4026 */
4027static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4028{
4029 NOREF(pszArgs);
4030
4031 PVMCPU pVCpu = VMMGetCpu(pVM);
4032 if (!pVCpu)
4033 pVCpu = pVM->apCpusR3[0];
4034
4035 char szInstruction[256];
4036 szInstruction[0] = '\0';
4037 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
4038 pHlp->pfnPrintf(pHlp, "\nCPUM%u: %s\n\n", pVCpu->idCpu, szInstruction);
4039}
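/* Likewise a DBGF info handler; assuming the usual "cpumguestinstr" info item
   name (not shown in this excerpt):
       VBoxManage debugvm "MyVM" info cpumguestinstr */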
4040
4041
4042/**
4043 * Display the hypervisor cpu state.
4044 *
4045 * @param pVM The cross context VM structure.
4046 * @param pHlp The info helper functions.
4047 * @param pszArgs Arguments, ignored.
4048 */
4049static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4050{
4051 PVMCPU pVCpu = VMMGetCpu(pVM);
4052 if (!pVCpu)
4053 pVCpu = pVM->apCpusR3[0];
4054
4055 CPUMDUMPTYPE enmType;
4056 const char *pszComment;
4057 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4058 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
4059
4060 pHlp->pfnPrintf(pHlp,
4061 ".dr0=%016RX64 .dr1=%016RX64 .dr2=%016RX64 .dr3=%016RX64\n"
4062 ".dr4=%016RX64 .dr5=%016RX64 .dr6=%016RX64 .dr7=%016RX64\n",
4063 pVCpu->cpum.s.Hyper.dr[0], pVCpu->cpum.s.Hyper.dr[1], pVCpu->cpum.s.Hyper.dr[2], pVCpu->cpum.s.Hyper.dr[3],
4064 pVCpu->cpum.s.Hyper.dr[4], pVCpu->cpum.s.Hyper.dr[5], pVCpu->cpum.s.Hyper.dr[6], pVCpu->cpum.s.Hyper.dr[7]);
4065 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
4066}
4067
4068
4069/**
4070 * Display the host cpu state.
4071 *
4072 * @param pVM The cross context VM structure.
4073 * @param pHlp The info helper functions.
4074 * @param pszArgs Arguments, ignored.
4075 */
4076static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4077{
4078 CPUMDUMPTYPE enmType;
4079 const char *pszComment;
4080 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4081 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
4082
4083 PVMCPU pVCpu = VMMGetCpu(pVM);
4084 if (!pVCpu)
4085 pVCpu = pVM->apCpusR3[0];
4086 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
4087
4088 /*
4089 * Format the EFLAGS.
4090 */
4091 uint64_t efl = pCtx->rflags;
4092 char szEFlags[80];
4093 cpumR3InfoFormatFlags(&szEFlags[0], efl);
4094
4095 /*
4096 * Format the registers.
4097 */
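    /* Note: registers shown as 'xxxx...' below are deliberately not printed;
       they are volatile across the world switch and thus not part of the
       saved ring-0 host context. */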
4098 pHlp->pfnPrintf(pHlp,
4099 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
4100 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
4101 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
4102 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
4103 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
4104 "r14=%016RX64 r15=%016RX64\n"
4105 "iopl=%d %31s\n"
4106 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
4107 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
4108 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
4109 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
4110 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
4111 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
4112 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
4113 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
4114 ,
4115 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
4116 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
4117 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
4118 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
4119 pCtx->r11, pCtx->r12, pCtx->r13,
4120 pCtx->r14, pCtx->r15,
4121 X86_EFL_GET_IOPL(efl), szEFlags,
4122 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
4123 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
4124 pCtx->cr4, pCtx->ldtr, pCtx->tr,
4125 pCtx->dr0, pCtx->dr1, pCtx->dr2,
4126 pCtx->dr3, pCtx->dr6, pCtx->dr7,
4127 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
4128 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
4129 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
4130}
4131
4132/**
4133 * Structure used when disassembling instructions in DBGF.
4134 * This is used so the reader function can get at the state it needs.
4135 */
4136typedef struct CPUMDISASSTATE
4137{
4138 /** Pointer to the disassembler CPU state. */
4139 PDISCPUSTATE pCpu;
4140 /** Pointer to the VM. */
4141 PVM pVM;
4142 /** Pointer to the VMCPU. */
4143 PVMCPU pVCpu;
4144 /** Pointer to the first byte in the segment. */
4145 RTGCUINTPTR GCPtrSegBase;
4146 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
4147 RTGCUINTPTR GCPtrSegEnd;
4148 /** The size of the segment minus 1. */
4149 RTGCUINTPTR cbSegLimit;
4150 /** Pointer to the current page - R3 Ptr. */
4151 void const *pvPageR3;
4152 /** Pointer to the current page - GC Ptr. */
4153 RTGCPTR pvPageGC;
4154 /** The lock information that PGMPhysReleasePageMappingLock needs. */
4155 PGMPAGEMAPLOCK PageMapLock;
4156 /** Whether the PageMapLock is valid or not. */
4157 bool fLocked;
4158 /** 64 bits mode or not. */
4159 bool f64Bits;
4160} CPUMDISASSTATE, *PCPUMDISASSTATE;
4161
4162
4163/**
4164 * @callback_method_impl{FNDISREADBYTES}
4165 */
4166static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
4167{
4168 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
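    /* Loop until at least cbMinRead bytes have been read, re-translating the
       page mapping and re-checking the segment limit each time the read
       crosses a guest page boundary. */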
4169 for (;;)
4170 {
4171 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
4172
4173 /*
4174 * Need to update the page translation?
4175 */
4176 if ( !pState->pvPageR3
4177 || (GCPtr >> GUEST_PAGE_SHIFT) != (pState->pvPageGC >> GUEST_PAGE_SHIFT))
4178 {
4179 /* translate the address */
4180 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
4181
4182 /* Release mapping lock previously acquired. */
4183 if (pState->fLocked)
4184 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
4185 int rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
4186 if (RT_SUCCESS(rc))
4187 pState->fLocked = true;
4188 else
4189 {
4190 pState->fLocked = false;
4191 pState->pvPageR3 = NULL;
4192 return rc;
4193 }
4194 }
4195
4196 /*
4197 * Check the segment limit.
4198 */
4199 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
4200 return VERR_OUT_OF_SELECTOR_BOUNDS;
4201
4202 /*
4203 * Calc how much we can read.
4204 */
4205 uint32_t cb = GUEST_PAGE_SIZE - (GCPtr & GUEST_PAGE_OFFSET_MASK);
4206 if (!pState->f64Bits)
4207 {
4208 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
4209 if (cb > cbSeg && cbSeg)
4210 cb = cbSeg;
4211 }
4212 if (cb > cbMaxRead)
4213 cb = cbMaxRead;
4214
4215 /*
4216 * Read and advance or exit.
4217 */
4218 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & GUEST_PAGE_OFFSET_MASK), cb);
4219 offInstr += (uint8_t)cb;
4220 if (cb >= cbMinRead)
4221 {
4222 pDis->cbCachedInstr = offInstr;
4223 return VINF_SUCCESS;
4224 }
4225 cbMinRead -= (uint8_t)cb;
4226 cbMaxRead -= (uint8_t)cb;
4227 }
4228}
4229
4230
4231/**
4232 * Disassemble an instruction and return the information in the provided structure.
4233 *
4234 * @returns VBox status code.
4235 * @param pVM The cross context VM structure.
4236 * @param pVCpu The cross context virtual CPU structure.
4237 * @param pCtx Pointer to the guest CPU context.
4238 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
4239 * @param pCpu Disassembly state.
4240 * @param pszPrefix String prefix for logging (debug only).
4241 *
4242 */
4243VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu,
4244 const char *pszPrefix)
4245{
4246 CPUMDISASSTATE State;
4247 int rc;
4248
4249 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
4250 State.pCpu = pCpu;
4251 State.pvPageGC = 0;
4252 State.pvPageR3 = NULL;
4253 State.pVM = pVM;
4254 State.pVCpu = pVCpu;
4255 State.fLocked = false;
4256 State.f64Bits = false;
4257
4258 /*
4259 * Get selector information.
4260 */
4261 DISCPUMODE enmDisCpuMode;
4262 if ( (pCtx->cr0 & X86_CR0_PE)
4263 && pCtx->eflags.Bits.u1VM == 0)
4264 {
4265 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
4266 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
4267 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
4268 State.GCPtrSegBase = pCtx->cs.u64Base;
4269 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
4270 State.cbSegLimit = pCtx->cs.u32Limit;
4271 enmDisCpuMode = (State.f64Bits)
4272 ? DISCPUMODE_64BIT
4273 : pCtx->cs.Attr.n.u1DefBig
4274 ? DISCPUMODE_32BIT
4275 : DISCPUMODE_16BIT;
4276 }
4277 else
4278 {
4279 /* real or V86 mode */
4280 enmDisCpuMode = DISCPUMODE_16BIT;
4281 State.GCPtrSegBase = pCtx->cs.Sel * 16;
4282 State.GCPtrSegEnd = 0xFFFFFFFF;
4283 State.cbSegLimit = 0xFFFFFFFF;
4284 }
4285
4286 /*
4287 * Disassemble the instruction.
4288 */
4289 uint32_t cbInstr;
4290#ifndef LOG_ENABLED
4291 RT_NOREF_PV(pszPrefix);
4292 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
4293 if (RT_SUCCESS(rc))
4294 {
4295#else
4296 char szOutput[160];
4297 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
4298 pCpu, &cbInstr, szOutput, sizeof(szOutput));
4299 if (RT_SUCCESS(rc))
4300 {
4301 /* log it */
4302 if (pszPrefix)
4303 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
4304 else
4305 Log(("%s", szOutput));
4306#endif
4307 rc = VINF_SUCCESS;
4308 }
4309 else
4310 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
4311
4312 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
4313 if (State.fLocked)
4314 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
4315
4316 return rc;
4317}
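/* A minimal usage sketch (hypothetical caller; pVM, pVCpu, pCtx and the prefix
 * string are assumptions for the example): disassemble at the current guest
 * RIP and use the decoded length.
 *
 * @code
 *     DISCPUSTATE Cpu;
 *     int rc = CPUMR3DisasmInstrCPU(pVM, pVCpu, pCtx, pCtx->rip, &Cpu, "example");
 *     if (RT_SUCCESS(rc))
 *         LogRel(("Decoded %u byte(s) at %RGv\n", Cpu.cbInstr, (RTGCPTR)pCtx->rip));
 * @endcode
 */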
4318
4319
4320
4321/**
4322 * API for controlling a few of the CPU features found in CR4.
4323 *
4324 * Currently only X86_CR4_TSD is accepted as input.
4325 *
4326 * @returns VBox status code.
4327 *
4328 * @param pVM The cross context VM structure.
4329 * @param fOr The CR4 OR mask.
4330 * @param fAnd The CR4 AND mask.
4331 */
4332VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
4333{
4334 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
4335 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
4336
4337 pVM->cpum.s.CR4.OrMask &= fAnd;
4338 pVM->cpum.s.CR4.OrMask |= fOr;
4339
4340 return VINF_SUCCESS;
4341}
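/* Usage sketch (illustrative only, mirroring the OrMask update above): force
 * the CR4.TSD bit while running guest code, then stop forcing it. Both calls
 * satisfy the X86_CR4_TSD-only input assertions.
 *
 * @code
 *     CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~(RTHCUINTREG)0);            // force TSD on
 *     CPUMR3SetCR4Feature(pVM, 0, ~(RTHCUINTREG)X86_CR4_TSD);            // stop forcing
 * @endcode
 */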
4342
4343
4344/**
4345 * Called when the ring-3 init phase completes.
4346 *
4347 * @returns VBox status code.
4348 * @param pVM The cross context VM structure.
4349 * @param enmWhat Which init phase.
4350 */
4351VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
4352{
4353 switch (enmWhat)
4354 {
4355 case VMINITCOMPLETED_RING3:
4356 {
4357 /*
4358 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
4359 * Only applicable/used on 64-bit hosts; see CPUMR0A.asm and @bugref{7138}.
4360 */
4361 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
4362 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4363 {
4364 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4365
4366 /* While loading a saved state, we fix this up in cpumR3LoadDone(). */
4367 if (fSupportsLongMode)
4368 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
4369 }
4370
4371 /* Register statistic counters for MSRs. */
4372 cpumR3MsrRegStats(pVM);
4373
4374 /* Create VMX-preemption timer for nested guests if required. Must be
4375 done here as CPUM is initialized before TM. */
4376 if (pVM->cpum.s.GuestFeatures.fVmx)
4377 {
4378 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4379 {
4380 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4381 char szName[32];
4382 RTStrPrintf(szName, sizeof(szName), "Nested VMX-preemption %u", idCpu);
4383 int rc = TMR3TimerCreate(pVM, TMCLOCK_VIRTUAL_SYNC, cpumR3VmxPreemptTimerCallback, pVCpu,
4384 TMTIMER_FLAGS_RING0, szName, &pVCpu->cpum.s.hNestedVmxPreemptTimer);
4385 AssertLogRelRCReturn(rc, rc);
4386 }
4387 }
4388 break;
4389 }
4390
4391 default:
4392 break;
4393 }
4394 return VINF_SUCCESS;
4395}
4396
4397
4398/**
4399 * Called when the ring-0 init phase has completed.
4400 *
4401 * @param pVM The cross context VM structure.
4402 */
4403VMMR3DECL(void) CPUMR3LogCpuIdAndMsrFeatures(PVM pVM)
4404{
4405 /*
4406 * Enable log buffering as we're going to log a lot of lines.
4407 */
4408 bool const fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
4409
4410 /*
4411 * Log the cpuid.
4412 */
4413 RTCPUSET OnlineSet;
4414 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
4415 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
4416 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
4417 RTCPUID cCores = RTMpGetCoreCount();
4418 if (cCores)
4419 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
4420 LogRel(("************************* CPUID dump ************************\n"));
4421 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
4422 LogRel(("\n"));
4423 DBGFR3_INFO_LOG_SAFE(pVM, "cpuid", "verbose"); /* macro */
4424 LogRel(("******************** End of CPUID dump **********************\n"));
4425
4426 /*
4427 * Log VT-x extended features.
4428 *
4429 * SVM features are currently all covered under CPUID so there is nothing
4430 * to do here for SVM.
4431 */
4432 if (pVM->cpum.s.HostFeatures.fVmx)
4433 {
4434 LogRel(("*********************** VT-x features ***********************\n"));
4435 DBGFR3Info(pVM->pUVM, "cpumvmxfeat", "default", DBGFR3InfoLogRelHlp());
4436 LogRel(("\n"));
4437 LogRel(("******************* End of VT-x features ********************\n"));
4438 }
4439
4440 /*
4441 * Restore the log buffering state to what it was previously.
4442 */
4443 RTLogRelSetBuffering(fOldBuffered);
4444}
4445