VirtualBox

source: vbox/trunk/src/VBox/VMM/PGM.cpp@ 12559

Last change on this file since 12559 was 12417, checked in by vboxsync, 16 years ago

Flushing the pgm pool cache when switching guest modes does not apply to nested paging.

/* $Id: PGM.cpp 12417 2008-09-12 11:45:24Z vboxsync $ */
/** @file
 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
 */

/*
 * Copyright (C) 2006-2007 Sun Microsystems, Inc.
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
 * Clara, CA 95054 USA or visit http://www.sun.com if you need
 * additional information or have any questions.
 */


/** @page pg_pgm PGM - The Page Manager and Monitor
 *
 *
 *
 * @section sec_pgm_modes Paging Modes
 *
 * There are three memory contexts: Host Context (HC), Guest Context (GC)
 * and the intermediate context. When talking about paging, HC can also be
 * referred to as "host paging" and GC as "shadow paging".
 *
 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging
 * mode is defined by the host operating system. The mode used for shadow
 * paging depends on the host paging mode and the mode the guest is currently
 * in. The following relation between the two is defined:
 *
 * @verbatim
   Host ->  32-bit  |  PAE   | AMD64  |
   Guest            |        |        |
   ==v================================
   32-bit   32-bit     PAE      PAE
   --------|--------|--------|--------|
   PAE       PAE       PAE      PAE
   --------|--------|--------|--------|
   AMD64     AMD64     AMD64    AMD64
   --------|--------|--------|--------| @endverbatim
 *
 * All configurations except those on the diagonal (upper left) are expected
 * to require special effort from the switcher (i.e. be a bit slower).
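 *
 * As a rough sketch (hypothetical helper; see pgmR3CalcShadowMode for the
 * real logic), the table boils down to:
 * @verbatim
    // Sketch only: covers the three basic modes from the table above and
    // assumes the host paging mode has been mapped onto the PGMMODE scale.
    static PGMMODE pgmSketchShadowModeFromTable(PGMMODE enmGuest, PGMMODE enmHost)
    {
        if (enmGuest == PGMMODE_AMD64)
            return PGMMODE_AMD64;   // bottom row
        if (enmGuest == PGMMODE_PAE || enmHost != PGMMODE_32BIT)
            return PGMMODE_PAE;     // middle row and the off-diagonal 32-bit cases
        return PGMMODE_32BIT;       // upper left corner
    }
   @endverbatim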
 *
 *
 *
 * @section sec_pgm_shw The Shadow Memory Context
 *
 *
 * [..]
 *
 * Because guest context mappings require PDPT and PML4 entries to allow
 * writing on AMD64, the two upper levels will have fixed flags whatever the
 * guest is thinking of using there. So, when shadowing the PD level we will
 * calculate the effective flags of the PD and all the higher levels. In
 * legacy PAE mode this only applies to the PWT and PCD bits (the rest are
 * ignored/reserved/MBZ). We will ignore those bits for the present.
 *
 *
 *
 * @section sec_pgm_int The Intermediate Memory Context
 *
 * The world switch goes through an intermediate memory context whose purpose
 * it is to provide different mappings of the switcher code. All guest
 * mappings are also present in this context.
 *
 * The switcher code is mapped at the same location as on the host, at an
 * identity mapped location (physical equals virtual address), and at the
 * hypervisor location.
 *
 * PGM maintains page tables for 32-bit, PAE and AMD64 paging modes. This
 * simplifies switching guest CPU modes and maintaining consistency, at the
 * cost of more code to do the work. All memory used for those page tables
 * is located below 4GB (this includes page tables for guest context
 * mappings).
 *
 *
 * @subsection subsec_pgm_int_gc Guest Context Mappings
 *
 * During assignment and relocation of a guest context mapping the
 * intermediate memory context is used to verify the new location.
 *
 * Guest context mappings are currently restricted to below 4GB, for reasons
 * of simplicity. This may change when we implement AMD64 support.
 *
 *
 *
 *
 * @section sec_pgm_misc Misc
 *
 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
 *
 * The differences between legacy PAE and long mode PAE are:
 *   -# PDPE bits 1, 2, 5 and 6 are defined differently. In legacy mode they
 *      are all marked down as must-be-zero, while in long mode 1, 2 and 5
 *      have the usual meanings and 6 is ignored (AMD). This means that upon
 *      switching to legacy PAE mode we'll have to clear these bits, and when
 *      going to long mode they must be set. This applies to both the
 *      intermediate and shadow contexts; however, we don't need to do it
 *      for the intermediate one since we're executing with CR0.WP at that
 *      time.
 *   -# CR3 allows a 32-byte aligned address in legacy mode, while in long
 *      mode a page aligned one is required.
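 *
 * A minimal sketch of the fixup the first point implies (raw bit masks and a
 * hypothetical helper name):
 * @verbatim
    // PDPE bits 1 (RW), 2 (US) and 5 (A) are MBZ in legacy PAE but must be
    // set in long mode; bit 6 is ignored there. Sketch only.
    static uint64_t pgmSketchFixPdpe(uint64_t uPdpe, bool fLongMode)
    {
        const uint64_t fBits125 = RT_BIT_64(1) | RT_BIT_64(2) | RT_BIT_64(5);
        if (fLongMode)
            return uPdpe | fBits125;
        return uPdpe & ~(fBits125 | RT_BIT_64(6));
    }
   @endverbatim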
 *
 *
 * @section sec_pgm_handlers Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
 *
 * We currently implement three types of virtual access handlers: ALL, WRITE
 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERTYPE for some more details.
 *
 * The HYPERVISOR access handlers are kept in a separate tree since they don't
 * apply to physical pages (PGMTREES::HyperVirtHandlers) and only need to be
 * consulted in a special \#PF case. The ALL and WRITE handlers are in the
 * PGMTREES::VirtHandlers tree; the rest of this section is about these
 * handlers.
 *
 * We'll go through the life cycle of a handler and try to make sense of it
 * all; don't know how successful this is going to be...
 *
 * 1. A handler is registered through the PGMR3HandlerVirtualRegister and
 *    PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual
 *    handlers and create a new node that is inserted into the AVL tree
 *    (range key). Then a full PGM resync is flagged (clear pool, sync cr3,
 *    update virtual bit of PGMPAGE).
 *
 * 2. The following PGMSyncCR3/SyncCR3 operation will first invoke
 *    HandlerVirtualUpdate.
 *
 * 2a. HandlerVirtualUpdate will look up all the pages covered by virtual
 *     handlers via the current guest CR3 and update the physical page ->
 *     virtual handler translation. Needless to say, this doesn't exactly
 *     scale very well. If any changes are detected, it will flag a virtual
 *     bit update just like we did on registration. PGMPHYS pages with
 *     changes will have their virtual handler state reset to NONE.
 *
 * 2b. The virtual bit update process will iterate all the pages covered by
 *     all the virtual handlers and update the PGMPAGE virtual handler state
 *     to the max of all virtual handlers on that page (see the sketch after
 *     step 6).
 *
 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make
 *     sure we don't miss any alias mappings of the monitored pages.
 *
 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
 *
 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
 *    read-only and resumed if it's a WRITE handler. If it's an ALL handler
 *    we will call the handlers like in the next step. If the physical
 *    mapping has changed we will - some time in the future - perform a
 *    handler callback (optional) and update the physical -> virtual handler
 *    cache.
 *
 * 4. \#PF(,write) on a page in the range. This will cause the handler to
 *    be invoked.
 *
 * 5. The guest invalidates the page and changes the physical backing or
 *    unmaps it. This should cause the invalidation callback to be invoked
 *    (it might not yet be 100% perfect). Exactly what happens next... is
 *    this where we mess up and end up out of sync for a while?
 *
 * 6. The handler is deregistered by the client via
 *    PGMHandlerVirtualDeregister. We will then set all PGMPAGEs in the
 *    physical -> virtual handler cache for this handler to NONE and trigger
 *    a full PGM resync (basically the same as in step 1), which means 2 is
 *    executed again.
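 *
 * A sketch of the 2b update (hypothetical names; the real state values and
 * accessors live in PGMInternal.h):
 * @verbatim
    // For each physical page, the virtual handler state becomes the maximum
    // of the states of all handlers covering it. Sketch only.
    unsigned uState = 0;    // NONE
    for (unsigned i = 0; i < cHandlers; i++)
        if (paHandlers[i].uState > uState)
            uState = paHandlers[i].uState;
    sketchPageSetVirtHandlerState(pPage, uState);
   @endverbatim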
 *
 *
 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
 *
 * There are a bunch of things that need to be done to make the virtual
 * handlers work 100% correctly and more efficiently.
 *
 * The first bit hasn't been implemented yet because it's going to slow the
 * whole mess down even more, and besides it seems to be working reliably for
 * our current uses. OTOH, some of the optimizations might end up more or less
 * implementing the missing bits, so we'll see.
 *
 * On the optimization side, the first thing to do is to try to avoid
 * unnecessary cache flushing. Then try to team up with the shadowing code to
 * track changes in mappings by means of access to them (shadow in), updates
 * to shadow pages, invlpg, and shadow PT discarding (perhaps).
 *
 * Some ideas that have popped up for optimization for current and new
 * features:
 *    - bitmap indicating where there are virtual handlers installed.
 *      (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
 *    - Further optimize this by min/max (needs min/max avl getters).
 *    - Shadow page table entry bit (if any left)?
 *
 */


/** @page pg_pgmPhys PGMPhys - Physical Guest Memory Management.
 *
 *
 * Objectives:
 *      - Guest RAM over-commitment using memory ballooning,
 *        zero pages and general page sharing.
 *      - Moving or mirroring a VM onto a different physical machine.
 *
 *
 * @subsection subsec_pgmPhys_Definitions Definitions
 *
 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
 * machinery associated with it.
 *
 *
 *
 *
 * @subsection subsec_pgmPhys_AllocPage Allocating a page.
 *
 * Initially we map *all* guest memory to the (per VM) zero page, which
 * means that none of the read functions will cause pages to be allocated.
 *
 * Exception: the accessed bit in page tables that have been shared. This
 * must be handled, but we must also make sure PGMGst*Modify doesn't make
 * unnecessary modifications.
 *
 * Allocation points:
 *      - PGMPhysWriteGCPhys and PGMPhysWrite.
 *      - Replacing a zero page mapping at \#PF.
 *      - Replacing a shared page mapping at \#PF.
 *      - ROM registration (currently MMR3RomRegister).
 *      - VM restore (pgmR3Load).
 *
 * For the first three it would make sense to keep a few pages handy
 * until we've reached the max memory commitment for the VM.
 *
 * For the ROM registration, we know exactly how many pages we need
 * and will request these from ring-0. For restore, we will save
 * the number of non-zero pages in the saved state and allocate
 * them up front. This would allow the ring-0 component to refuse
 * the request if there isn't sufficient memory available for VM use.
 *
 * Btw. for both ROM and restore allocations we won't be requiring
 * zeroed pages as they are going to be filled instantly.
 *
 *
 * @subsection subsec_pgmPhys_FreePage Freeing a page
 *
 * There are a few points where a page can be freed:
 *      - After being replaced by the zero page.
 *      - After being replaced by a shared page.
 *      - After being ballooned by the guest additions.
 *      - At reset.
 *      - At restore.
 *
 * When freeing one or more pages they will be returned to the ring-0
 * component and replaced by the zero page.
 *
 * The reasoning for clearing out all the pages on reset is that it will
 * return us to the exact same state as on power on, and may thereby help
 * us reduce the memory load on the system. Further it might have a
 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
 *
 * On restore, as mentioned under the allocation topic, pages should be
 * freed / allocated depending on how many are actually required by the
 * new VM state. The simplest approach is to do like on reset, and free
 * all non-ROM pages and then allocate what we need.
 *
 * A measure to prevent some fragmentation would be to let each allocation
 * chunk have some affinity towards the VM having allocated the most pages
 * from it. Also, try to make sure to allocate from allocation chunks that
 * are almost full. Admittedly, both these measures might work counter to
 * our intentions and it's probably not worth putting a lot of effort,
 * cpu time or memory into this.
 *
 *
 * @subsection subsec_pgmPhys_SharePage Sharing a page
 *
 * The basic idea is that there will be an idle priority kernel
 * thread walking the non-shared VM pages, hashing them and looking for
 * pages with the same checksum. If such pages are found, it will compare
 * them byte-by-byte to see if they actually are identical. If found to be
 * identical it will allocate a shared page, copy the content, check that
 * the page didn't change while doing this, and finally request both the
 * VMs to use the shared page instead. If the page is all zeros (special
 * checksum and byte-by-byte check) it will request the VM that owns it
 * to replace it with the zero page.
 *
 * To make this efficient, we will have to make sure not to try to share a
 * page that will change its contents soon. This part requires the most
 * work. A simple idea would be to request the VM to write monitor the page
 * for a while to make sure it isn't modified any time soon. Also, it may
 * make sense to skip pages that are being write monitored since this
 * information is readily available to the thread if it works on the
 * per-VM guest memory structures (presently called PGMRAMRANGE).
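 *
 * A sketch of the scan step (hypothetical names throughout):
 * @verbatim
    // Hash first, compare bytes only on a hash match, and re-check after
    // copying so a racing guest write is caught. Sketch only.
    uint32_t uHash = sketchHashPage(pvPage);                 // cheap checksum
    PSKETCHREC pRec = sketchLookupByHash(uHash);
    if (pRec && !memcmp(pvPage, pRec->pvPage, PAGE_SIZE))    // byte-by-byte
    {
        void *pvShared = sketchAllocAndCopySharedPage(pvPage);
        if (!memcmp(pvPage, pvShared, PAGE_SIZE))            // still unchanged?
            sketchRequestVMsToUseSharedPage(pRec, pvShared);
    }
   @endverbatim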
 *
 *
 * @subsection subsec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
 *
 * The pages are organized in allocation chunks in ring-0; this is a necessity
 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
 * could easily work on a page-by-page basis if we liked. Whether this is possible
 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
 * become a problem as part of the idea here is that we wish to return memory to
 * the host system.
 *
 * For instance, starting two VMs at the same time, they will both allocate the
 * guest memory on-demand and if permitted their page allocations will be
 * intermixed. Shut down one of the two VMs and it will be difficult to return
 * any memory to the host system because the page allocations for the two VMs
 * are mixed up in the same allocation chunks.
 *
 * To further complicate matters, when pages are freed because they have been
 * ballooned or become shared/zero the whole idea is that the page is supposed
 * to be reused by another VM or returned to the host system. This will cause
 * allocation chunks to contain pages belonging to different VMs and prevent
 * returning memory to the host when one of those VMs shuts down.
 *
 * The only way to really deal with this problem is to move pages. This can
 * either be done at VM shutdown and/or by the idle priority worker thread
 * that will be responsible for finding sharable/zero pages. The mechanisms
 * involved for coercing a VM to move a page (or to do it for it) will be
 * the same as when telling it to share/zero a page.
 *
 *
 * @subsection subsec_pgmPhys_Tracking Tracking Structures And Their Cost
 *
 * There's a difficult balance between keeping the per-page tracking structures
 * (global and guest page) easy to use and keeping them from eating too much
 * memory. We have limited virtual memory resources available when operating in
 * 32-bit kernel space (on 64-bit it's quite a different story). The tracking
 * structures will be designed such that we can deal with up to 32GB of memory
 * on a 32-bit system and essentially unlimited amounts on 64-bit ones.
 *
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_Kernel Kernel Space
 *
 * @see pg_GMM
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_PerVM Per-VM
 *
 * Fixed info is the physical address of the page (HCPhys) and the page id
 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
 * Today we're restricting ourselves to 40(-12) bits because this is the current
 * restriction of all AMD64 implementations (I think Barcelona will up this
 * to 48(-12) bits, not that it really matters) and I needed the bits for
 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
 * decent range for the page id: 2^(28+12) = 1TB.
 *
 * In addition to these, we'll have to keep maintaining the page flags as we
 * currently do. Although it wouldn't harm to optimize these quite a bit, like
 * for instance the ROM shouldn't depend on having a write handler installed
 * in order for it to become read-only. A RO/RW bit should be considered so
 * that the page syncing code doesn't have to mess about checking multiple
 * flag combinations (ROM || RW handler || write monitored) in order to
 * figure out how to set up a shadow PTE. But this, of course, is second
 * priority at present. Currently this requires 12 bits, but could probably
 * be optimized to ~8.
 *
 * Then there are the 24 bits used to track which shadow page tables are
 * currently mapping a page for the purpose of speeding up physical
 * access handlers, and thereby the page pool cache. More bits for this
 * purpose wouldn't hurt IIRC.
 *
 * Then there is a new field in which we need to record what kind of page
 * this is: shared, zero, normal or write-monitored-normal. This'll
 * require 2 bits. One bit might be needed for indicating whether a
 * write monitored page has been written to. And yet another one or
 * two for tracking migration status. 3-4 bits total then.
 *
 * Whatever is left can be used to record the shareability of a
 * page. The page checksum will not be stored in the per-VM table as
 * the idle thread will not be permitted to do modifications to it.
 * It will instead have to keep its own working set of potentially
 * shareable pages and their checksums and stuff.
 *
 * For the present we'll keep the current packing of the
 * PGMRAMRANGE::aHCPhys to keep the changes simple, except of course
 * we'll have to change it to a struct with a total of 128 bits at
 * our disposal.
 *
 * The initial layout will be like this:
 * @verbatim
    RTHCPHYS HCPhys;            The current stuff.
        63:40                   Current shadow PT tracking stuff.
        39:12                   The physical page frame number.
        11:0                    The current flags.
    uint32_t u28PageId : 28;    The page id.
    uint32_t u2State : 2;       The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;    Whether a write monitored page was written to.
    uint32_t u1Reserved : 1;    Reserved for later.
    uint32_t u32Reserved;       Reserved for later, mostly sharing stats.
   @endverbatim
 *
 * The final layout will be something like this:
 * @verbatim
    RTHCPHYS HCPhys;            The current stuff.
        63:48                   High page id (12+).
        47:12                   The physical page frame number.
        11:0                    Low page id.
    uint32_t fReadOnly : 1;     Whether it's a readonly page (rom or monitored in some way).
    uint32_t u3Type : 3;        The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
    uint32_t u2PhysMon : 2;     Physical access handler type {none, read, write, all}.
    uint32_t u2VirtMon : 2;     Virtual access handler type {none, read, write, all}.
    uint32_t u2State : 2;       The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;    Whether a write monitored page was written to.
    uint32_t u20Reserved : 20;  Reserved for later, mostly sharing stats.
    uint32_t u32Tracking;       The shadow PT tracking stuff, roughly.
   @endverbatim
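 *
 * Expressed as a C struct, the final layout above would look roughly like
 * this (a sketch only; the real per-page structure is PGMPAGE in
 * PGMInternal.h):
 * @verbatim
    typedef struct SKETCHPGMPAGE
    {
        RTHCPHYS HCPhys;            // 63:48 high page id, 47:12 PFN, 11:0 low page id.
        uint32_t fReadOnly   : 1;   // ROM or monitored in some way.
        uint32_t u3Type      : 3;   // RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM.
        uint32_t u2PhysMon   : 2;   // Physical handler: none, read, write, all.
        uint32_t u2VirtMon   : 2;   // Virtual handler: none, read, write, all.
        uint32_t u2State     : 2;   // zero, shared, normal, write monitored.
        uint32_t fWrittenTo  : 1;   // Write monitored page was written to.
        uint32_t u20Reserved : 20;  // Later; mostly sharing stats.
        uint32_t u32Tracking;       // The shadow PT tracking stuff.
    } SKETCHPGMPAGE;                // 8 + 4 + 4 = 16 bytes per page.
   @endverbatim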
 *
 * Cost wise, this means we'll double the cost for guest memory. There isn't
 * any way around that I'm afraid. It means that the cost of dealing out 32GB
 * of memory to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MB.
 * Or another example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
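 *
 * Spelled out, with PAGE_SHIFT = 12:
 * @verbatim
    32GB >> PAGE_SHIFT  =  2^35 / 2^12  =  2^23  =  8M pages
    8M pages * 16 bytes  =  128MB
    1GB  ->  256K pages * 16 bytes  =  4MB
   @endverbatim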
 *
 * A couple of cost examples for the total cost per-VM + kernel:
 * @verbatim
    32-bit Windows and 32-bit linux:
        1GB  guest ram, 256K pages:    4MB +   2MB(+) =   6MB
        4GB  guest ram, 1M pages:     16MB +   8MB(+) =  24MB
        32GB guest ram, 8M pages:    128MB +  64MB(+) = 192MB
    64-bit Windows and 64-bit linux:
        1GB  guest ram, 256K pages:    4MB +   3MB(+) =   7MB
        4GB  guest ram, 1M pages:     16MB +  12MB(+) =  28MB
        32GB guest ram, 8M pages:    128MB +  96MB(+) = 224MB
   @endverbatim
 *
 * UPDATE - 2007-09-27:
 * We will need a ballooned flag/state too, because we cannot
 * trust the guest 100% and reporting the same page as ballooned more
 * than once will put the GMM off balance.
 *
 *
 * @subsection subsec_pgmPhys_Serializing Serializing Access
 *
 * Initially, we'll try a simple scheme:
 *
 *      - The per-VM RAM tracking structures (PGMRAMRANGE) are only modified
 *        by the EMT thread of that VM while in the pgm critsect.
 *      - Other threads in the VM process that need to make reliable use of
 *        the per-VM RAM tracking structures will enter the critsect.
 *      - No process external thread or kernel thread will ever try to enter
 *        the pgm critical section, as that just won't work.
 *      - The idle thread (and similar threads) don't need 100% reliable
 *        data when performing their tasks, as the EMT thread will be the one
 *        to do the actual changes later anyway. So, as long as it only
 *        accesses the main ram range, it can do so by somehow preventing the
 *        VM from being destroyed while it works on it...
 *
 *      - The over-commitment management, including the allocating/freeing of
 *        chunks, is serialized by a ring-0 mutex lock (a fast one since the
 *        more mundane mutex implementation is broken on Linux).
 *      - A separate mutex protects the set of allocation chunks so
 *        that pages can be shared and/or freed up while some other VM is
 *        allocating more chunks. This mutex can be taken from under the
 *        other one, but not the other way around.
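 *
 * In sketch form, the intended ring-0 lock order (hypothetical names):
 * @verbatim
    // OK: the chunk-set mutex is taken while holding the main mutex.
    sketchMutexAcquire(&pGMM->Mtx);
    sketchMutexAcquire(&pGMM->ChunkSetMtx);
    // ... share/free pages while another VM allocates chunks ...
    sketchMutexRelease(&pGMM->ChunkSetMtx);
    sketchMutexRelease(&pGMM->Mtx);
    // NOT OK: acquiring them the other way around risks deadlock.
   @endverbatim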
 *
 *
 * @subsection subsec_pgmPhys_Request VM Request interface
 *
 * When in ring-0 it will become necessary to send requests to a VM so it can,
 * for instance, move a page while defragmenting during VM destroy. The idle
 * thread will make use of this interface to request VMs to set up shared
 * pages and to perform write monitoring of pages.
 *
 * I would propose an interface similar to the current VMReq interface, similar
 * in that it doesn't require locking and that the one sending the request may
 * wait for completion if it wishes to. This shouldn't be very difficult to
 * realize.
 *
 * The requests themselves are also pretty simple. They are basically:
 *      -# Check that some precondition is still true.
 *      -# Do the update.
 *      -# Update all shadow page tables involved with the page.
 *
 * The 3rd step is identical to what we're already doing when updating a
 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
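 *
 * A sketch of such a request handler (hypothetical names and status code):
 * @verbatim
    static int sketchReqSharePage(PVM pVM, RTGCPHYS GCPhys, uint32_t uChecksum)
    {
        // 1. Precondition: the page hasn't changed since it was scanned.
        if (sketchChecksumPage(pVM, GCPhys) != uChecksum)
            return VERR_SKETCH_TRY_AGAIN;
        // 2. Do the update: swap in the shared page.
        // 3. Update all shadow PTs mapping the page, like
        //    pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs does.
        return VINF_SUCCESS;
    }
   @endverbatim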
 *
 *
 *
 * @section sec_pgmPhys_MappingCaches Mapping Caches
 *
 * In order to be able to map memory in and out, and to be able to support
 * guests with more RAM than we've got virtual address space, we'll be
 * employing a mapping cache. There is already a tiny one for GC (see
 * PGMGCDynMapGCPageEx) and we'll create a similar one for ring-0, unless we
 * decide to set up a dedicated memory context for the HWACCM execution.
 *
 *
 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
 *
 * We've considered implementing the ring-3 mapping cache page based, but found
 * that this was bothersome when one had to take into account TLBs+SMP and
 * portability (missing the necessary APIs on several platforms). There were
 * also some performance concerns with this approach which hadn't quite been
 * worked out.
 *
 * Instead, we'll be mapping allocation chunks into the VM process. This
 * simplifies matters quite a bit since we don't need to invent any new ring-0
 * stuff, only some minor RTR0MEMOBJ mapping stuff. The main concern, compared
 * to the previous idea, is that mapping or unmapping a 1MB chunk is more
 * costly than a single page, although how much more costly is uncertain. We'll
 * try to address this by using a very big cache, preferably bigger than the
 * actual VM RAM size if possible. The current VM RAM sizes should give some
 * idea for 32-bit boxes, while on 64-bit we can probably get away with
 * employing an unlimited cache.
 *
 * The cache has two parts, as already indicated: the ring-3 side and the
 * ring-0 side.
 *
 * The ring-0 side will be tied to the page allocator since it will operate on
 * the memory objects it contains. It will therefore require the first ring-0
 * mutex discussed in @ref subsec_pgmPhys_Serializing. We'll need some double
 * housekeeping wrt who has mapped what, I think, since both VMMR0.r0 and
 * RTR0MemObj will keep track of mapping relations.
 *
 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
 * require anyone that desires to do changes to the mapping cache to do that
 * from within this critsect. Alternatively, we could employ a separate critsect
 * for serializing changes to the mapping cache as this would reduce potential
 * contention with other threads accessing mappings unrelated to the changes
 * that are in process. We can see about this later; contention will show
 * up in the statistics anyway, so it'll be simple to tell.
 *
 * The organization of the ring-3 part will be very much like how the allocation
 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
 * having to walk the tree all the time, we'll have a couple of lookaside entries
 * like we do for I/O ports and MMIO in IOM.
 *
 * The simplified flow of a PGMPhysRead/Write function:
 *      -# Enter the PGM critsect.
 *      -# Lookup GCPhys in the ram ranges and get the Page ID.
 *      -# Calc the Allocation Chunk ID from the Page ID.
 *      -# Check the lookaside entries and then the AVL tree for the Chunk ID.
 *         If not found in cache:
 *              -# Call ring-0 and request it to be mapped and supply
 *                 a chunk to be unmapped if the cache is maxed out already.
 *              -# Insert the new mapping into the AVL tree (id + R3 address).
 *      -# Update the relevant lookaside entry and return the mapping address.
 *      -# Do the read/write according to monitoring flags and everything.
 *      -# Leave the critsect.
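 *
 * Sketched in C (hypothetical helper names; pgmLock/pgmUnlock stand for
 * entering and leaving the PGM critsect):
 * @verbatim
    int sketchPhysAccess(PVM pVM, RTGCPHYS GCPhys)
    {
        pgmLock(pVM);                                           // 1
        uint32_t idPage  = sketchRamRangeLookup(pVM, GCPhys);   // 2
        uint32_t idChunk = idPage >> SKETCH_CHUNKID_SHIFT;      // 3
        void *pvChunk = sketchLookasideThenTree(pVM, idChunk);  // 4
        if (!pvChunk)
            pvChunk = sketchRing0MapChunk(pVM, idChunk);        // 4a + 4b
        // 5, 6: update the lookaside entry, then do the read/write
        //       according to the monitoring flags.
        pgmUnlock(pVM);                                         // 7
        return VINF_SUCCESS;
    }
   @endverbatim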
 *
 *
 * @section sec_pgmPhys_Fallback Fallback
 *
 * Currently all the "second tier" hosts will not support the
 * RTR0MemObjAllocPhysNC API and thus require a fallback.
 *
 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
 * will return to the ring-3 caller (and later ring-0) asking it to seed
 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
 * then perform an SUPPageAlloc(cbChunk >> PAGE_SHIFT) call and make a
 * "SeededAllocPages" call to ring-0.
 *
 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
 * all page sharing (zero page detection will continue). It will also force
 * all allocations to come from the VM which seeded the page. Both these
 * measures are taken to make sure that there will never be any need for
 * mapping anything into ring-3 - everything will be mapped already.
 *
 * Whether we'll continue to use the current MM locked memory management
 * for this I don't quite know (I'd prefer not to, and just ditch all that
 * together); we'll see what's simplest to do.
 *
 *
 *
 * @section sec_pgmPhys_Changes Changes
 *
 * Breakdown of the changes involved?
 */


/** Saved state data unit version. */
#define PGM_SAVED_STATE_VERSION 6

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_PGM
#include <VBox/dbgf.h>
#include <VBox/pgm.h>
#include <VBox/cpum.h>
#include <VBox/iom.h>
#include <VBox/sup.h>
#include <VBox/mm.h>
#include <VBox/em.h>
#include <VBox/stam.h>
#include <VBox/rem.h>
#include <VBox/selm.h>
#include <VBox/ssm.h>
#include "PGMInternal.h"
#include <VBox/vm.h>
#include <VBox/dbg.h>
#include <VBox/hwaccm.h>

#include <iprt/assert.h>
#include <iprt/alloc.h>
#include <iprt/asm.h>
#include <iprt/thread.h>
#include <iprt/string.h>
#include <VBox/param.h>
#include <VBox/err.h>


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static int pgmR3InitPaging(PVM pVM);
static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
#ifdef VBOX_STRICT
static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser);
#endif
static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst);
static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);

#ifdef VBOX_WITH_STATISTICS
static void pgmR3InitStats(PVM pVM);
#endif

#ifdef VBOX_WITH_DEBUGGER
/** @todo all but the two last commands must be converted to 'info'. */
static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
# ifdef VBOX_STRICT
static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
# endif
#endif


/*******************************************************************************
*   Global Variables                                                           *
*******************************************************************************/
#ifdef VBOX_WITH_DEBUGGER
/** Command descriptors. */
static const DBGCCMD g_aCmds[] =
{
    /* pszCmd,         cArgsMin, cArgsMax, paArgDesc, cArgDescs, pResultDesc, fFlags, pfnHandler,         pszSyntax, ....pszDescription */
    { "pgmram",        0, 0, NULL, 0, NULL, 0, pgmR3CmdRam,        "", "Display the ram ranges." },
    { "pgmmap",        0, 0, NULL, 0, NULL, 0, pgmR3CmdMap,        "", "Display the mapping ranges." },
    { "pgmsync",       0, 0, NULL, 0, NULL, 0, pgmR3CmdSync,       "", "Sync the CR3 page." },
#ifdef VBOX_STRICT
    { "pgmassertcr3",  0, 0, NULL, 0, NULL, 0, pgmR3CmdAssertCR3,  "", "Check the shadow CR3 mapping." },
#endif
    { "pgmsyncalways", 0, 0, NULL, 0, NULL, 0, pgmR3CmdSyncAlways, "", "Toggle permanent CR3 syncing." },
};
#endif




/*
 * Shadow - 32-bit mode
 */
#define PGM_SHW_TYPE PGM_TYPE_32BIT
#define PGM_SHW_NAME(name) PGM_SHW_NAME_32BIT(name)
#define PGM_SHW_NAME_GC_STR(name) PGM_SHW_NAME_GC_32BIT_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_32BIT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_REAL(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_32BIT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_PROT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_32BIT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_32BIT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - PAE mode
 */
#define PGM_SHW_TYPE PGM_TYPE_PAE
#define PGM_SHW_NAME(name) PGM_SHW_NAME_PAE(name)
#define PGM_SHW_NAME_GC_STR(name) PGM_SHW_NAME_GC_PAE_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_PAE_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PROT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_PAE_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_PAE_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PAE(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_PAE_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - AMD64 mode
 */
#define PGM_SHW_TYPE PGM_TYPE_AMD64
#define PGM_SHW_NAME(name) PGM_SHW_NAME_AMD64(name)
#define PGM_SHW_NAME_GC_STR(name) PGM_SHW_NAME_GC_AMD64_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_AMD64_STR(name)
#include "PGMShw.h"

/* Guest - AMD64 mode */
#define PGM_GST_TYPE PGM_TYPE_AMD64
#define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_AMD64_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_AMD64_AMD64(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_AMD64_AMD64_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR

/*
 * Shadow - Nested paging mode
 */
#define PGM_SHW_TYPE PGM_TYPE_NESTED
#define PGM_SHW_NAME(name) PGM_SHW_NAME_NESTED(name)
#define PGM_SHW_NAME_GC_STR(name) PGM_SHW_NAME_GC_NESTED_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_NESTED_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_REAL(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_NESTED_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PROT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_NESTED_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_NESTED_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PAE(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_NESTED_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - AMD64 mode */
#define PGM_GST_TYPE PGM_TYPE_AMD64
#define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_AMD64_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_AMD64(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_NESTED_AMD64_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR


#ifdef PGM_WITH_EPT
/*
 * Shadow - EPT
 */
#define PGM_SHW_TYPE PGM_TYPE_EPT
#define PGM_SHW_NAME(name) PGM_SHW_NAME_EPT(name)
#define PGM_SHW_NAME_GC_STR(name) PGM_SHW_NAME_GC_EPT_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_EPT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_REAL(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_EPT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PROT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_EPT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_EPT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PAE(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_EPT_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - AMD64 mode */
#define PGM_GST_TYPE PGM_TYPE_AMD64
#define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
#define PGM_GST_NAME_GC_STR(name) PGM_GST_NAME_GC_AMD64_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_AMD64(name)
#define PGM_BTH_NAME_GC_STR(name) PGM_BTH_NAME_GC_EPT_AMD64_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR
#endif /* PGM_WITH_EPT */

1105/**
1106 * Initiates the paging of VM.
1107 *
1108 * @returns VBox status code.
1109 * @param pVM Pointer to VM structure.
1110 */
1111PGMR3DECL(int) PGMR3Init(PVM pVM)
1112{
1113 LogFlow(("PGMR3Init:\n"));
1114
1115 /*
1116 * Assert alignment and sizes.
1117 */
1118 AssertRelease(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
1119
1120 /*
1121 * Init the structure.
1122 */
1123 pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
1124 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1125 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1126 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1127 pVM->pgm.s.GCPhysCR3 = NIL_RTGCPHYS;
1128 pVM->pgm.s.GCPhysGstCR3Monitored = NIL_RTGCPHYS;
1129 pVM->pgm.s.fA20Enabled = true;
1130 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
1131 pVM->pgm.s.pGstPaePDPTHC = NULL;
1132 pVM->pgm.s.pGstPaePDPTGC = 0;
1133 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGstPaePDsHC); i++)
1134 {
1135 pVM->pgm.s.apGstPaePDsHC[i] = NULL;
1136 pVM->pgm.s.apGstPaePDsGC[i] = 0;
1137 pVM->pgm.s.aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
1138 pVM->pgm.s.aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
1139 }
1140
1141#ifdef VBOX_STRICT
1142 VMR3AtStateRegister(pVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
1143#endif
1144
1145 /*
1146 * Get the configured RAM size - to estimate saved state size.
1147 */
1148 uint64_t cbRam;
1149 int rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
1150 if (rc == VERR_CFGM_VALUE_NOT_FOUND)
1151 cbRam = pVM->pgm.s.cbRamSize = 0;
1152 else if (VBOX_SUCCESS(rc))
1153 {
1154 if (cbRam < PAGE_SIZE)
1155 cbRam = 0;
1156 cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
1157 pVM->pgm.s.cbRamSize = (RTUINT)cbRam;
1158 }
1159 else
1160 {
1161 AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Vrc.\n", rc));
1162 return rc;
1163 }
1164
1165 /*
1166 * Register saved state data unit.
1167 */
1168 rc = SSMR3RegisterInternal(pVM, "pgm", 1, PGM_SAVED_STATE_VERSION, (size_t)cbRam + sizeof(PGM),
1169 NULL, pgmR3Save, NULL,
1170 NULL, pgmR3Load, NULL);
1171 if (VBOX_FAILURE(rc))
1172 return rc;
1173
1174 /*
1175 * Initialize the PGM critical section and flush the phys TLBs
1176 */
1177 rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSect, "PGM");
1178 AssertRCReturn(rc, rc);
1179
1180 PGMR3PhysChunkInvalidateTLB(pVM);
1181 PGMPhysInvalidatePageR3MapTLB(pVM);
1182 PGMPhysInvalidatePageR0MapTLB(pVM);
1183 PGMPhysInvalidatePageGCMapTLB(pVM);
1184
1185 /*
1186 * Trees
1187 */
1188 rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesHC);
1189 if (VBOX_SUCCESS(rc))
1190 {
1191 pVM->pgm.s.pTreesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pTreesHC);
1192
1193 /*
1194 * Alocate the zero page.
1195 */
1196 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
1197 }
1198 if (VBOX_SUCCESS(rc))
1199 {
1200 pVM->pgm.s.pvZeroPgGC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
1201 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1202 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTHCPHYS);
1203 pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
1204 AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
1205
1206 /*
1207 * Init the paging.
1208 */
1209 rc = pgmR3InitPaging(pVM);
1210 }
1211 if (VBOX_SUCCESS(rc))
1212 {
1213 /*
1214 * Init the page pool.
1215 */
1216 rc = pgmR3PoolInit(pVM);
1217 }
1218 if (VBOX_SUCCESS(rc))
1219 {
1220 /*
1221 * Info & statistics
1222 */
1223 DBGFR3InfoRegisterInternal(pVM, "mode",
1224 "Shows the current paging mode. "
1225 "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing's given.",
1226 pgmR3InfoMode);
1227 DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
1228 "Dumps all the entries in the top level paging table. No arguments.",
1229 pgmR3InfoCr3);
1230 DBGFR3InfoRegisterInternal(pVM, "phys",
1231 "Dumps all the physical address ranges. No arguments.",
1232 pgmR3PhysInfo);
1233 DBGFR3InfoRegisterInternal(pVM, "handlers",
1234 "Dumps physical, virtual and hyper virtual handlers. "
1235 "Pass 'phys', 'virt', 'hyper' as argument if only one kind is wanted."
1236 "Add 'nost' if the statistics are unwanted, use together with 'all' or explicit selection.",
1237 pgmR3InfoHandlers);
1238 DBGFR3InfoRegisterInternal(pVM, "mappings",
1239 "Dumps guest mappings.",
1240 pgmR3MapInfo);
1241
1242 STAM_REL_REG(pVM, &pVM->pgm.s.cGuestModeChanges, STAMTYPE_COUNTER, "/PGM/cGuestModeChanges", STAMUNIT_OCCURENCES, "Number of guest mode changes.");
1243#ifdef VBOX_WITH_STATISTICS
1244 pgmR3InitStats(pVM);
1245#endif
1246#ifdef VBOX_WITH_DEBUGGER
1247 /*
1248 * Debugger commands.
1249 */
1250 static bool fRegisteredCmds = false;
1251 if (!fRegisteredCmds)
1252 {
1253 int rc = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
1254 if (VBOX_SUCCESS(rc))
1255 fRegisteredCmds = true;
1256 }
1257#endif
1258 return VINF_SUCCESS;
1259 }
1260
1261 /* Almost no cleanup necessary, MM frees all memory. */
1262 PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
1263
1264 return rc;
1265}
1266
1267
1268/**
1269 * Init paging.
1270 *
1271 * Since we need to check what mode the host is operating in before we can choose
1272 * the right paging functions for the host we have to delay this until R0 has
1273 * been initialized.
1274 *
1275 * @returns VBox status code.
1276 * @param pVM VM handle.
1277 */
1278static int pgmR3InitPaging(PVM pVM)
1279{
1280 /*
1281 * Force a recalculation of modes and switcher so everyone gets notified.
1282 */
1283 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1284 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1285 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1286
1287 /*
1288 * Allocate static mapping space for whatever the cr3 register
1289 * points to and in the case of PAE mode to the 4 PDs.
1290 */
1291 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
1292 if (VBOX_FAILURE(rc))
1293 {
1294 AssertMsgFailed(("Failed to reserve two pages for cr mapping in HMA, rc=%Vrc\n", rc));
1295 return rc;
1296 }
1297 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1298
1299 /*
1300 * Allocate pages for the three possible intermediate contexts
1301 * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
1302 * for the sake of simplicity. The AMD64 uses the PAE for the
1303 * lower levels, making the total number of pages 11 (3 + 7 + 1).
1304 *
1305 * We assume that two page tables will be enought for the core code
1306 * mappings (HC virtual and identity).
1307 */
1308 pVM->pgm.s.pInterPD = (PX86PD)MMR3PageAllocLow(pVM);
1309 pVM->pgm.s.apInterPTs[0] = (PX86PT)MMR3PageAllocLow(pVM);
1310 pVM->pgm.s.apInterPTs[1] = (PX86PT)MMR3PageAllocLow(pVM);
1311 pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);
1312 pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);
1313 pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1314 pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1315 pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1316 pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1317 pVM->pgm.s.pInterPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM);
1318 pVM->pgm.s.pInterPaePDPT64 = (PX86PDPT)MMR3PageAllocLow(pVM);
1319 pVM->pgm.s.pInterPaePML4 = (PX86PML4)MMR3PageAllocLow(pVM);
1320 if ( !pVM->pgm.s.pInterPD
1321 || !pVM->pgm.s.apInterPTs[0]
1322 || !pVM->pgm.s.apInterPTs[1]
1323 || !pVM->pgm.s.apInterPaePTs[0]
1324 || !pVM->pgm.s.apInterPaePTs[1]
1325 || !pVM->pgm.s.apInterPaePDs[0]
1326 || !pVM->pgm.s.apInterPaePDs[1]
1327 || !pVM->pgm.s.apInterPaePDs[2]
1328 || !pVM->pgm.s.apInterPaePDs[3]
1329 || !pVM->pgm.s.pInterPaePDPT
1330 || !pVM->pgm.s.pInterPaePDPT64
1331 || !pVM->pgm.s.pInterPaePML4)
1332 {
1333 AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
1334 return VERR_NO_PAGE_MEMORY;
1335 }
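 /* For reference, the 12 intermediate context pages allocated above break down as:
  *      32-bit: pInterPD + apInterPTs[0..1]                                (3 pages)
  *      PAE:    pInterPaePDPT + apInterPaePDs[0..3] + apInterPaePTs[0..1]  (7 pages)
  *      AMD64:  pInterPaePML4 + pInterPaePDPT64, reusing the PAE PDs       (2 pages)
  */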
1336
1337 pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
1338 AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
1339 pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
1340 AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
1341 pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
1342 AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK));
1343
1344 /*
1345 * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
1346 */
1347 ASMMemZeroPage(pVM->pgm.s.pInterPD);
1348 ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
1349 ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);
1350
1351 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
1352 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);
1353
1354 ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
1355 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
1356 {
1357 ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
1358 pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
1359 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
1360 }
1361
1362 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
1363 {
1364 const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
1365 pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
1366 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
1367 }
1368
1369 RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
1370 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
1371 pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
1372 | HCPhysInterPaePDPT64;
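 /* In other words: every PML4 entry points at the same 64-bit PDPT, and the 512
  * PDPT entries cycle through the same four PAE PDs, so the mapping of the first
  * 4GB repeats throughout the entire 64-bit intermediate address space. */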
1373
1374 /*
1375 * Allocate pages for the three possible guest contexts (AMD64, PAE and plain 32-Bit).
1376 * We allocate pages for all three possibilities in order to simplify mappings and
1377 * avoid resource failures during mode switches. So, we need to cover all levels
1378 * of the first 4GB down to PD level.
1379 * As with the intermediate context, AMD64 uses the PAE PDPT and PDs.
1380 */
1381 pVM->pgm.s.pHC32BitPD = (PX86PD)MMR3PageAllocLow(pVM);
1382 pVM->pgm.s.apHCPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1383 pVM->pgm.s.apHCPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1384 AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[0] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[1]);
1385 pVM->pgm.s.apHCPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1386 AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[1] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[2]);
1387 pVM->pgm.s.apHCPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1388 AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[2] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[3]);
1389 pVM->pgm.s.pHCPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM);
1390 pVM->pgm.s.pHCNestedRoot = MMR3PageAllocLow(pVM);
1391
1392 if ( !pVM->pgm.s.pHC32BitPD
1393 || !pVM->pgm.s.apHCPaePDs[0]
1394 || !pVM->pgm.s.apHCPaePDs[1]
1395 || !pVM->pgm.s.apHCPaePDs[2]
1396 || !pVM->pgm.s.apHCPaePDs[3]
1397 || !pVM->pgm.s.pHCPaePDPT
1398 || !pVM->pgm.s.pHCNestedRoot)
1399 {
1400 AssertMsgFailed(("Failed to allocate pages for the guest paging contexts!\n"));
1401 return VERR_NO_PAGE_MEMORY;
1402 }
1403
1404 /* get physical addresses. */
1405 pVM->pgm.s.HCPhys32BitPD = MMPage2Phys(pVM, pVM->pgm.s.pHC32BitPD);
1406 Assert(MMPagePhys2Page(pVM, pVM->pgm.s.HCPhys32BitPD) == pVM->pgm.s.pHC32BitPD);
1407 pVM->pgm.s.aHCPhysPaePDs[0] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[0]);
1408 pVM->pgm.s.aHCPhysPaePDs[1] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[1]);
1409 pVM->pgm.s.aHCPhysPaePDs[2] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[2]);
1410 pVM->pgm.s.aHCPhysPaePDs[3] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[3]);
1411 pVM->pgm.s.HCPhysPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pHCPaePDPT);
1412 pVM->pgm.s.HCPhysNestedRoot = MMPage2Phys(pVM, pVM->pgm.s.pHCNestedRoot);
1413
1414 /*
1415 * Initialize the pages, setting up the PDPT entries for action below 4GB.
1416 */
1417 ASMMemZero32(pVM->pgm.s.pHC32BitPD, PAGE_SIZE);
1418 ASMMemZero32(pVM->pgm.s.pHCPaePDPT, PAGE_SIZE);
1419 ASMMemZero32(pVM->pgm.s.pHCNestedRoot, PAGE_SIZE);
1420 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apHCPaePDs); i++)
1421 {
1422 ASMMemZero32(pVM->pgm.s.apHCPaePDs[i], PAGE_SIZE);
1423 pVM->pgm.s.pHCPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT | pVM->pgm.s.aHCPhysPaePDs[i];
1424 /* The flags will be corrected when entering and leaving long mode. */
1425 }
1426
1427 CPUMSetHyperCR3(pVM, (uint32_t)pVM->pgm.s.HCPhys32BitPD);
1428
1429 /*
1430 * Initialize paging workers and mode from current host mode
1431 * and the guest running in real mode.
1432 */
1433 pVM->pgm.s.enmHostMode = SUPGetPagingMode();
1434 switch (pVM->pgm.s.enmHostMode)
1435 {
1436 case SUPPAGINGMODE_32_BIT:
1437 case SUPPAGINGMODE_32_BIT_GLOBAL:
1438 case SUPPAGINGMODE_PAE:
1439 case SUPPAGINGMODE_PAE_GLOBAL:
1440 case SUPPAGINGMODE_PAE_NX:
1441 case SUPPAGINGMODE_PAE_GLOBAL_NX:
1442 break;
1443
1444 case SUPPAGINGMODE_AMD64:
1445 case SUPPAGINGMODE_AMD64_GLOBAL:
1446 case SUPPAGINGMODE_AMD64_NX:
1447 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
1448#ifndef VBOX_WITH_HYBIRD_32BIT_KERNEL
1449 if (ARCH_BITS != 64)
1450 {
1451 AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1452 LogRel(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1453 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1454 }
1455#endif
1456 break;
1457 default:
1458 AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
1459 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1460 }
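 /* Note: a 64-bit host paging mode on a non-64-bit build is rejected above
  * (unless the hybrid 32-bit kernel hack is enabled), presumably because the
  * 32-bit switcher and shadow paging code cannot operate on the host's
  * 4-level page tables. */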
1461 rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
1462 if (VBOX_SUCCESS(rc))
1463 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
1464 if (VBOX_SUCCESS(rc))
1465 {
1466 LogFlow(("pgmR3InitPaging: returns successfully\n"));
1467#if HC_ARCH_BITS == 64
1468 LogRel(("Debug: HCPhys32BitPD=%VHp aHCPhysPaePDs={%VHp,%VHp,%VHp,%VHp} HCPhysPaePDPT=%VHp HCPhysPaePML4=%VHp\n",
1469 pVM->pgm.s.HCPhys32BitPD, pVM->pgm.s.aHCPhysPaePDs[0], pVM->pgm.s.aHCPhysPaePDs[1], pVM->pgm.s.aHCPhysPaePDs[2], pVM->pgm.s.aHCPhysPaePDs[3],
1470 pVM->pgm.s.HCPhysPaePDPT, pVM->pgm.s.HCPhysPaePML4));
1471 LogRel(("Debug: HCPhysInterPD=%VHp HCPhysInterPaePDPT=%VHp HCPhysInterPaePML4=%VHp\n",
1472 pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
1473 LogRel(("Debug: apInterPTs={%VHp,%VHp} apInterPaePTs={%VHp,%VHp} apInterPaePDs={%VHp,%VHp,%VHp,%VHp} pInterPaePDPT64=%VHp\n",
1474 MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
1475 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
1476 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
1477 MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
1478#endif
1479
1480 return VINF_SUCCESS;
1481 }
1482
1483 LogFlow(("pgmR3InitPaging: returns %Vrc\n", rc));
1484 return rc;
1485}
1486
1487
1488#ifdef VBOX_WITH_STATISTICS
1489/**
1490 * Init statistics
1491 */
1492static void pgmR3InitStats(PVM pVM)
1493{
1494 PPGM pPGM = &pVM->pgm.s;
1495 STAM_REG(pVM, &pPGM->StatGCInvalidatePage, STAMTYPE_PROFILE, "/PGM/GC/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMGCInvalidatePage() profiling.");
1496 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a 4KB page.");
1497 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a 4MB page.");
1498 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() skipped a 4MB page.");
1499 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a page directory containing mappings (no conflict).");
1500 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a not accessed page directory.");
1501 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a not present page directory.");
1502 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for an out of sync page directory.");
1503 STAM_REG(pVM, &pPGM->StatGCInvalidatePageSkipped, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/Skipped", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was skipped due to not present shw or pending SyncCR3.");
1504 STAM_REG(pVM, &pPGM->StatGCSyncPT, STAMTYPE_PROFILE, "/PGM/GC/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGCSyncPT() body.");
1505 STAM_REG(pVM, &pPGM->StatGCAccessedPage, STAMTYPE_COUNTER, "/PGM/GC/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1506 STAM_REG(pVM, &pPGM->StatGCDirtyPage, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1507 STAM_REG(pVM, &pPGM->StatGCDirtyPageBig, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1508 STAM_REG(pVM, &pPGM->StatGCDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1509 STAM_REG(pVM, &pPGM->StatGCDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1510 STAM_REG(pVM, &pPGM->StatGCDirtiedPage, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1511 STAM_REG(pVM, &pPGM->StatGCDirtyTrackRealPF, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/RealPF", STAMUNIT_OCCURENCES, "The number of real pages faults during dirty bit tracking.");
1512 STAM_REG(pVM, &pPGM->StatGCPageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1513 STAM_REG(pVM, &pPGM->StatGCDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/GC/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrackDirtyBit() body.");
1514 STAM_REG(pVM, &pPGM->StatGCSyncPTAlloc, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Alloc", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() needed to allocate page tables.");
1515 STAM_REG(pVM, &pPGM->StatGCSyncPTConflict, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Conflicts", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() detected conflicts.");
1516 STAM_REG(pVM, &pPGM->StatGCSyncPTFailed, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() failed.");
1517
1518 STAM_REG(pVM, &pPGM->StatGCTrap0e, STAMTYPE_PROFILE, "/PGM/GC/Trap0e", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGCTrap0eHandler() body.");
1519 STAM_REG(pVM, &pPGM->StatCheckPageFault, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/CheckPageFault", STAMUNIT_TICKS_PER_CALL, "Profiling of checking for dirty/access emulation faults.");
1520 STAM_REG(pVM, &pPGM->StatLazySyncPT, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of lazy page table syncing.");
1521 STAM_REG(pVM, &pPGM->StatMapping, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/Mapping", STAMUNIT_TICKS_PER_CALL, "Profiling of checking virtual mappings.");
1522 STAM_REG(pVM, &pPGM->StatOutOfSync, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of out of sync page handling.");
1523 STAM_REG(pVM, &pPGM->StatHandlers, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking handlers.");
1524 STAM_REG(pVM, &pPGM->StatEIPHandlers, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/EIPHandlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking eip handlers.");
1525 STAM_REG(pVM, &pPGM->StatTrap0eCSAM, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/CSAM", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is CSAM.");
1526 STAM_REG(pVM, &pPGM->StatTrap0eDirtyAndAccessedBits, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/DirtyAndAccessedBits", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1527 STAM_REG(pVM, &pPGM->StatTrap0eGuestTrap, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/GuestTrap", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1528 STAM_REG(pVM, &pPGM->StatTrap0eHndPhys, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerPhysical", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1529 STAM_REG(pVM, &pPGM->StatTrap0eHndVirt, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerVirtual",STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1530 STAM_REG(pVM, &pPGM->StatTrap0eHndUnhandled, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerUnhandled", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1531 STAM_REG(pVM, &pPGM->StatTrap0eMisc, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/Misc", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is not known.");
1532 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSync, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1533 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncHndPhys, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncHndPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1534 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncHndVirt, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncHndVirt", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1535 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncObsHnd, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncObsHnd", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1536 STAM_REG(pVM, &pPGM->StatTrap0eSyncPT, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1537
1538 STAM_REG(pVM, &pPGM->StatTrap0eMapHandler, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Mapping", STAMUNIT_OCCURENCES, "Number of traps due to access handlers in mappings.");
1539 STAM_REG(pVM, &pPGM->StatHandlersOutOfSync, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/OutOfSync", STAMUNIT_OCCURENCES, "Number of traps due to out-of-sync handled pages.");
1540 STAM_REG(pVM, &pPGM->StatHandlersPhysical, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Physical", STAMUNIT_OCCURENCES, "Number of traps due to physical access handlers.");
1541 STAM_REG(pVM, &pPGM->StatHandlersVirtual, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Virtual", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers.");
1542 STAM_REG(pVM, &pPGM->StatHandlersVirtualByPhys, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/VirtualByPhys", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by physical address.");
1543 STAM_REG(pVM, &pPGM->StatHandlersVirtualUnmarked, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/VirtualUnmarked", STAMUNIT_OCCURENCES,"Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1544 STAM_REG(pVM, &pPGM->StatHandlersUnhandled, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Unhandled", STAMUNIT_OCCURENCES, "Number of traps due to access outside range of monitored page(s).");
1545 STAM_REG(pVM, &pPGM->StatHandlersInvalid, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Invalid", STAMUNIT_OCCURENCES, "Number of traps due to access to invalid physical memory.");
1546
1547 STAM_REG(pVM, &pPGM->StatGCTrap0eConflicts, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Conflicts", STAMUNIT_OCCURENCES, "The number of times #PF was caused by an undetected conflict.");
1548 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNotPresentRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NPRead", STAMUNIT_OCCURENCES, "Number of user mode not present read page faults.");
1549 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNotPresentWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NPWrite", STAMUNIT_OCCURENCES, "Number of user mode not present write page faults.");
1550 STAM_REG(pVM, &pPGM->StatGCTrap0eUSWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Write", STAMUNIT_OCCURENCES, "Number of user mode write page faults.");
1551 STAM_REG(pVM, &pPGM->StatGCTrap0eUSReserved, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Reserved", STAMUNIT_OCCURENCES, "Number of user mode reserved bit page faults.");
1552 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNXE, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NXE", STAMUNIT_OCCURENCES, "Number of user mode NXE page faults.");
1553 STAM_REG(pVM, &pPGM->StatGCTrap0eUSRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Read", STAMUNIT_OCCURENCES, "Number of user mode read page faults.");
1554
1555 STAM_REG(pVM, &pPGM->StatGCTrap0eSVNotPresentRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NPRead", STAMUNIT_OCCURENCES, "Number of supervisor mode not present read page faults.");
1556 STAM_REG(pVM, &pPGM->StatGCTrap0eSVNotPresentWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NPWrite", STAMUNIT_OCCURENCES, "Number of supervisor mode not present write page faults.");
1557 STAM_REG(pVM, &pPGM->StatGCTrap0eSVWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/Write", STAMUNIT_OCCURENCES, "Number of supervisor mode write page faults.");
1558 STAM_REG(pVM, &pPGM->StatGCTrap0eSVReserved, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/Reserved", STAMUNIT_OCCURENCES, "Number of supervisor mode reserved bit page faults.");
1559 STAM_REG(pVM, &pPGM->StatGCTrap0eSNXE, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NXE", STAMUNIT_OCCURENCES, "Number of supervisor mode NXE page faults.");
1560 STAM_REG(pVM, &pPGM->StatGCTrap0eUnhandled, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/GuestPF/Unhandled", STAMUNIT_OCCURENCES, "Number of guest real page faults.");
1561 STAM_REG(pVM, &pPGM->StatGCTrap0eMap, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/GuestPF/Map", STAMUNIT_OCCURENCES, "Number of guest page faults due to map accesses.");
1562
1563 STAM_REG(pVM, &pPGM->StatTrap0eWPEmulGC, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/WP/InGC", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation.");
1564 STAM_REG(pVM, &pPGM->StatTrap0eWPEmulR3, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/WP/ToR3", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1565
1566 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteHandled, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteInt", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was successfully handled.");
1567 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteUnhandled, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteEmu", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was passed back to the recompiler.");
1568 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteConflict, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteConflict", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 monitoring detected a conflict.");
1569
1570 STAM_REG(pVM, &pPGM->StatGCPageOutOfSyncSupervisor, STAMTYPE_COUNTER, "/PGM/GC/OutOfSync/SuperVisor", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync (supervisor mode).");
1571 STAM_REG(pVM, &pPGM->StatGCPageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/GC/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync (user mode).");
1572
1573 STAM_REG(pVM, &pPGM->StatGCGuestROMWriteHandled, STAMTYPE_COUNTER, "/PGM/GC/ROMWriteInt", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was successfully handled.");
1574 STAM_REG(pVM, &pPGM->StatGCGuestROMWriteUnhandled, STAMTYPE_COUNTER, "/PGM/GC/ROMWriteEmu", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was passed back to the recompiler.");
1575
1576 STAM_REG(pVM, &pPGM->StatDynMapCacheHits, STAMTYPE_COUNTER, "/PGM/GC/DynMapCache/Hits" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache hits.");
1577 STAM_REG(pVM, &pPGM->StatDynMapCacheMisses, STAMTYPE_COUNTER, "/PGM/GC/DynMapCache/Misses" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache misses.");
1578
1579 STAM_REG(pVM, &pPGM->StatHCDetectedConflicts, STAMTYPE_COUNTER, "/PGM/HC/DetectedConflicts", STAMUNIT_OCCURENCES, "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1580 STAM_REG(pVM, &pPGM->StatHCGuestPDWrite, STAMTYPE_COUNTER, "/PGM/HC/PDWrite", STAMUNIT_OCCURENCES, "The total number of times pgmHCGuestPDWriteHandler() was called.");
1581 STAM_REG(pVM, &pPGM->StatHCGuestPDWriteConflict, STAMTYPE_COUNTER, "/PGM/HC/PDWriteConflict", STAMUNIT_OCCURENCES, "The number of times pgmHCGuestPDWriteHandler() detected a conflict.");
1582
1583 STAM_REG(pVM, &pPGM->StatHCInvalidatePage, STAMTYPE_PROFILE, "/PGM/HC/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMHCInvalidatePage() profiling.");
1584 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a 4KB page.");
1585 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a 4MB page.");
1586 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() skipped a 4MB page.");
1587 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a page directory containing mappings (no conflict).");
1588 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a not accessed page directory.");
1589 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a not present page directory.");
1590 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for an out of sync page directory.");
1591 STAM_REG(pVM, &pPGM->StatHCInvalidatePageSkipped, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/Skipped", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was skipped due to not present shw or pending SyncCR3.");
1592 STAM_REG(pVM, &pPGM->StatHCResolveConflict, STAMTYPE_PROFILE, "/PGM/HC/ResolveConflict", STAMUNIT_TICKS_PER_CALL, "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1593 STAM_REG(pVM, &pPGM->StatHCPrefetch, STAMTYPE_PROFILE, "/PGM/HC/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMR3PrefetchPage profiling.");
1594
1595 STAM_REG(pVM, &pPGM->StatHCSyncPT, STAMTYPE_PROFILE, "/PGM/HC/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMR3SyncPT() body.");
1596 STAM_REG(pVM, &pPGM->StatHCAccessedPage, STAMTYPE_COUNTER, "/PGM/HC/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1597 STAM_REG(pVM, &pPGM->StatHCDirtyPage, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1598 STAM_REG(pVM, &pPGM->StatHCDirtyPageBig, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1599 STAM_REG(pVM, &pPGM->StatHCDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1600 STAM_REG(pVM, &pPGM->StatHCDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1601 STAM_REG(pVM, &pPGM->StatHCDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/HC/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrackDirtyBit() body.");
1602
1603 STAM_REG(pVM, &pPGM->StatGCSyncPagePDNAs, STAMTYPE_COUNTER, "/PGM/GC/SyncPagePDNAs", STAMUNIT_OCCURENCES, "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1604 STAM_REG(pVM, &pPGM->StatGCSyncPagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/GC/SyncPagePDOutOfSync", STAMUNIT_OCCURENCES, "The number of times we've encountered an out-of-sync PD in SyncPage.");
1605 STAM_REG(pVM, &pPGM->StatHCSyncPagePDNAs, STAMTYPE_COUNTER, "/PGM/HC/SyncPagePDNAs", STAMUNIT_OCCURENCES, "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1606 STAM_REG(pVM, &pPGM->StatHCSyncPagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/HC/SyncPagePDOutOfSync", STAMUNIT_OCCURENCES, "The number of times we've encountered an out-of-sync PD in SyncPage.");
1607
1608 STAM_REG(pVM, &pPGM->StatFlushTLB, STAMTYPE_PROFILE, "/PGM/FlushTLB", STAMUNIT_OCCURENCES, "Profiling of the PGMFlushTLB() body.");
1609 STAM_REG(pVM, &pPGM->StatFlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1610 STAM_REG(pVM, &pPGM->StatFlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1611 STAM_REG(pVM, &pPGM->StatFlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1612 STAM_REG(pVM, &pPGM->StatFlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1613
1614 STAM_REG(pVM, &pPGM->StatGCSyncCR3, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1615 STAM_REG(pVM, &pPGM->StatGCSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1616 STAM_REG(pVM, &pPGM->StatGCSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers/VirtualUpdate",STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1617 STAM_REG(pVM, &pPGM->StatGCSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1618 STAM_REG(pVM, &pPGM->StatGCSyncCR3Global, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1619 STAM_REG(pVM, &pPGM->StatGCSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1620 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstCacheHit, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstChacheHit", STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1621 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1622 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1623 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1624 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1625 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1626
1627 STAM_REG(pVM, &pPGM->StatHCSyncCR3, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1628 STAM_REG(pVM, &pPGM->StatHCSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1629 STAM_REG(pVM, &pPGM->StatHCSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers/VirtualUpdate",STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1630 STAM_REG(pVM, &pPGM->StatHCSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1631 STAM_REG(pVM, &pPGM->StatHCSyncCR3Global, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1632 STAM_REG(pVM, &pPGM->StatHCSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1633 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstCacheHit, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstChacheHit", STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1634 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1635 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1636 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1637 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1638 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1639
1640 STAM_REG(pVM, &pPGM->StatVirtHandleSearchByPhysGC, STAMTYPE_PROFILE, "/PGM/VirtHandler/SearchByPhys/GC", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr in GC.");
1641 STAM_REG(pVM, &pPGM->StatVirtHandleSearchByPhysHC, STAMTYPE_PROFILE, "/PGM/VirtHandler/SearchByPhys/HC", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr in HC.");
1642 STAM_REG(pVM, &pPGM->StatHandlePhysicalReset, STAMTYPE_COUNTER, "/PGM/HC/HandlerPhysicalReset", STAMUNIT_OCCURENCES, "The number of times PGMR3HandlerPhysicalReset is called.");
1643
1644 STAM_REG(pVM, &pPGM->StatHCGstModifyPage, STAMTYPE_PROFILE, "/PGM/HC/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1645 STAM_REG(pVM, &pPGM->StatGCGstModifyPage, STAMTYPE_PROFILE, "/PGM/GC/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1646
1647 STAM_REG(pVM, &pPGM->StatSynPT4kGC, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/4k", STAMUNIT_OCCURENCES, "Nr of 4k PT syncs");
1648 STAM_REG(pVM, &pPGM->StatSynPT4kHC, STAMTYPE_COUNTER, "/PGM/HC/SyncPT/4k", STAMUNIT_OCCURENCES, "Nr of 4k PT syncs");
1649 STAM_REG(pVM, &pPGM->StatSynPT4MGC, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1650 STAM_REG(pVM, &pPGM->StatSynPT4MHC, STAMTYPE_COUNTER, "/PGM/HC/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1651
1652 STAM_REG(pVM, &pPGM->StatDynRamTotal, STAMTYPE_COUNTER, "/PGM/RAM/TotalAlloc", STAMUNIT_MEGABYTES, "Allocated megabytes of guest RAM.");
1653 STAM_REG(pVM, &pPGM->StatDynRamGrow, STAMTYPE_COUNTER, "/PGM/RAM/Grow", STAMUNIT_OCCURENCES, "Nr of pgmr3PhysGrowRange calls.");
1654
1655 STAM_REG(pVM, &pPGM->StatPageHCMapTlbHits, STAMTYPE_COUNTER, "/PGM/PageHCMap/TlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1656 STAM_REG(pVM, &pPGM->StatPageHCMapTlbMisses, STAMTYPE_COUNTER, "/PGM/PageHCMap/TlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1657 STAM_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_OCCURENCES, "Number of mapped chunks.");
1658 STAM_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_OCCURENCES, "Maximum number of mapped chunks.");
1659 STAM_REG(pVM, &pPGM->StatChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1660 STAM_REG(pVM, &pPGM->StatChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1661 STAM_REG(pVM, &pPGM->StatPageReplaceShared, STAMTYPE_COUNTER, "/PGM/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1662 STAM_REG(pVM, &pPGM->StatPageReplaceZero, STAMTYPE_COUNTER, "/PGM/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1663 STAM_REG(pVM, &pPGM->StatPageHandyAllocs, STAMTYPE_COUNTER, "/PGM/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1664 STAM_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_OCCURENCES, "The total number of pages.");
1665 STAM_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_OCCURENCES, "The number of private pages.");
1666 STAM_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_OCCURENCES, "The number of shared pages.");
1667 STAM_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_OCCURENCES, "The number of zero backed pages.");
1668
1669#ifdef PGMPOOL_WITH_GCPHYS_TRACKING
1670 STAM_REG(pVM, &pPGM->StatTrackVirgin, STAMTYPE_COUNTER, "/PGM/Track/Virgin", STAMUNIT_OCCURENCES, "The number of first-time shadowings.");
1671 STAM_REG(pVM, &pPGM->StatTrackAliased, STAMTYPE_COUNTER, "/PGM/Track/Aliased", STAMUNIT_OCCURENCES, "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
1672 STAM_REG(pVM, &pPGM->StatTrackAliasedMany, STAMTYPE_COUNTER, "/PGM/Track/AliasedMany", STAMUNIT_OCCURENCES, "The number of times we're tracking using cRef2.");
1673 STAM_REG(pVM, &pPGM->StatTrackAliasedLots, STAMTYPE_COUNTER, "/PGM/Track/AliasedLots", STAMUNIT_OCCURENCES, "The number of times we're hitting pages which have overflowed cRef2.");
1674 STAM_REG(pVM, &pPGM->StatTrackOverflows, STAMTYPE_COUNTER, "/PGM/Track/Overflows", STAMUNIT_OCCURENCES, "The number of times the extent list grows too long.");
1675 STAM_REG(pVM, &pPGM->StatTrackDeref, STAMTYPE_PROFILE, "/PGM/Track/Deref", STAMUNIT_OCCURENCES, "Profiling of SyncPageWorkerTrackDeref (expensive).");
1676#endif
1677
1678 for (unsigned i = 0; i < X86_PG_ENTRIES; i++)
1679 {
1680 /** @todo r=bird: We need a STAMR3RegisterF()! */
1681 char szName[32];
1682
1683 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/Trap0e/%04X", i);
1684 int rc = STAMR3Register(pVM, &pPGM->StatGCTrap0ePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of traps in page directory n.");
1685 AssertRC(rc);
1686
1687 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/SyncPt/%04X", i);
1688 rc = STAMR3Register(pVM, &pPGM->StatGCSyncPtPD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of syncs per PD n.");
1689 AssertRC(rc);
1690
1691 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/SyncPage/%04X", i);
1692 rc = STAMR3Register(pVM, &pPGM->StatGCSyncPagePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of out of sync pages per page directory n.");
1693 AssertRC(rc);
1694 }
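 /* With X86_PG_ENTRIES == 1024, the loop above registers counters named
  * "/PGM/GC/PD/Trap0e/0000" through "/PGM/GC/PD/Trap0e/03FF" and likewise
  * for SyncPt and SyncPage -- one per 32-bit page directory entry. */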
1695}
1696#endif /* VBOX_WITH_STATISTICS */
1697
1698/**
1699 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
1700 *
1701 * The dynamic mapping area will also be allocated and initialized at this
1702 * time. We could allocate it during PGMR3Init of course, but the mapping
1703 * wouldn't be allocated at that time, which would prevent us from setting
1704 * up the page table entries with the dummy page.
1705 *
1706 * @returns VBox status code.
1707 * @param pVM VM handle.
1708 */
1709PGMR3DECL(int) PGMR3InitDynMap(PVM pVM)
1710{
1711 RTGCPTR GCPtr;
1712 /*
1713 * Reserve space for mapping the paging pages into guest context.
1714 */
1715 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * (2 + RT_ELEMENTS(pVM->pgm.s.apHCPaePDs) + 1 + 2 + 2), "Paging", &GCPtr);
1716 AssertRCReturn(rc, rc);
1717 pVM->pgm.s.pGC32BitPD = GCPtr;
1718 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
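 /* The reservation above works out to 11 pages: the 32-bit PD plus a reserved
  * page (2), the four PAE PDs (4), a reserved page (1), the PAE PDPT plus a
  * reserved page (2), and two further pages (2). Compare with the layout that
  * PGMR3InitFinalize actually maps; the purpose of the trailing two pages
  * isn't spelled out here. */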
1719
1720 /*
1721 * Reserve space for the dynamic mappings.
1722 */
1723 /** @todo r=bird: Need to verify that the checks for crossing PTs are correct here. They seem to be assuming 4MB PTs... */
1724 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
1725 if (VBOX_SUCCESS(rc))
1726 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1727
1728 if ( VBOX_SUCCESS(rc)
1729 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_SHIFT))
1730 {
1731 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
1732 if (VBOX_SUCCESS(rc))
1733 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1734 }
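 /* If the first reservation straddled a 4MB (page directory) boundary, a
  * second one was made above so that the whole dynamic area lies within a
  * single page table -- the AssertRelease below depends on this. */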
1735 if (VBOX_SUCCESS(rc))
1736 {
1737 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_SHIFT));
1738 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1739 }
1740 return rc;
1741}
1742
1743
1744/**
1745 * Ring-3 init finalizing.
1746 *
1747 * @returns VBox status code.
1748 * @param pVM The VM handle.
1749 */
1750PGMR3DECL(int) PGMR3InitFinalize(PVM pVM)
1751{
1752 /*
1753 * Map the paging pages into the guest context.
1754 */
1755 RTGCPTR GCPtr = pVM->pgm.s.pGC32BitPD;
1756 AssertReleaseReturn(GCPtr, VERR_INTERNAL_ERROR);
1757
1758 int rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhys32BitPD, PAGE_SIZE, 0);
1759 AssertRCReturn(rc, rc);
1760 pVM->pgm.s.pGC32BitPD = GCPtr;
1761 GCPtr += PAGE_SIZE;
1762 GCPtr += PAGE_SIZE; /* reserved page */
1763
1764 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apHCPaePDs); i++)
1765 {
1766 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.aHCPhysPaePDs[i], PAGE_SIZE, 0);
1767 AssertRCReturn(rc, rc);
1768 pVM->pgm.s.apGCPaePDs[i] = GCPtr;
1769 GCPtr += PAGE_SIZE;
1770 }
1771 /* A bit of paranoia is justified. */
1772 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[0] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[1]);
1773 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[1] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[2]);
1774 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[2] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[3]);
1775 GCPtr += PAGE_SIZE; /* reserved page */
1776
1777 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysPaePDPT, PAGE_SIZE, 0);
1778 AssertRCReturn(rc, rc);
1779 pVM->pgm.s.pGCPaePDPT = GCPtr;
1780 GCPtr += PAGE_SIZE;
1781 GCPtr += PAGE_SIZE; /* reserved page */
1782
1783
1784 /*
1785 * Initialize the dynamic mappings (the space was reserved in PGMR3InitDynMap).
1786 * The dynamic mapping pages are initialized with dummy pages to simplify the cache.
1787 */
1788 /* get the pointer to the page table entries. */
1789 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
1790 AssertRelease(pMapping);
1791 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
1792 const unsigned iPT = off >> X86_PD_SHIFT;
1793 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
1794 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTGC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
1795 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsGC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
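 /* The computation above: 'off' is the offset of the dynamic area into its
  * hypervisor mapping, iPT selects the 4MB page table within that mapping and
  * iPG the 4KB page within the page table. From these we derive the GC
  * addresses of the first 32-bit and PAE PTEs backing the dynamic area. */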
1796
1797 /* init cache */
1798 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
1799 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHCPhysDynPageMapCache); i++)
1800 pVM->pgm.s.aHCPhysDynPageMapCache[i] = HCPhysDummy;
1801
1802 for (unsigned i = 0; i < MM_HYPER_DYNAMIC_SIZE; i += PAGE_SIZE)
1803 {
1804 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + i, HCPhysDummy, PAGE_SIZE, 0);
1805 AssertRCReturn(rc, rc);
1806 }
1807
1808 /* Note that AMD uses all the 8 reserved bits for the address (so 40 bits in total); Intel only goes up to 36 bits, so
1809 * we stick to 36 as well.
1810 *
1811 * @todo How to test for 40-bit support? Long mode seems to be the test criterion.
1812 */
1813 uint32_t u32Dummy, u32Features;
1814 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
1815
1816 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
1817 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(36) - 1;
1818 else
1819 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
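 /* For reference: RT_BIT_64(36) - 1 = 0x0000000fffffffff and
  * RT_BIT_64(32) - 1 = 0x00000000ffffffff. */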
1820
1821 LogRel(("PGMR3InitFinalize: 4 MB PSE mask %VGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
1822
1823 return rc;
1824}
1825
1826
1827/**
1828 * Applies relocations to data and code managed by this
1829 * component. This function will be called at init and
1830 * whenever the VMM needs to relocate itself inside the GC.
1831 *
1832 * @param pVM The VM.
1833 * @param offDelta Relocation delta relative to old location.
1834 */
1835PGMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
1836{
1837 LogFlow(("PGMR3Relocate\n"));
1838
1839 /*
1840 * Paging stuff.
1841 */
1842 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
1843 /** @todo move this into shadow and guest specific relocation functions. */
1844 AssertMsg(pVM->pgm.s.pGC32BitPD, ("Init order, no relocation before paging is initialized!\n"));
1845 pVM->pgm.s.pGC32BitPD += offDelta;
1846 pVM->pgm.s.pGuestPDGC += offDelta;
1847 AssertCompile(RT_ELEMENTS(pVM->pgm.s.apGCPaePDs) == RT_ELEMENTS(pVM->pgm.s.apGstPaePDsGC));
1848 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGCPaePDs); i++)
1849 {
1850 pVM->pgm.s.apGCPaePDs[i] += offDelta;
1851 pVM->pgm.s.apGstPaePDsGC[i] += offDelta;
1852 }
1853 pVM->pgm.s.pGstPaePDPTGC += offDelta;
1854 pVM->pgm.s.pGCPaePDPT += offDelta;
1855
1856 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
1857 pgmR3ModeDataSwitch(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
1858
1859 PGM_SHW_PFN(Relocate, pVM)(pVM, offDelta);
1860 PGM_GST_PFN(Relocate, pVM)(pVM, offDelta);
1861 PGM_BTH_PFN(Relocate, pVM)(pVM, offDelta);
1862
1863 /*
1864 * Trees.
1865 */
1866 pVM->pgm.s.pTreesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pTreesHC);
1867
1868 /*
1869 * Ram ranges.
1870 */
1871 if (pVM->pgm.s.pRamRangesR3)
1872 {
1873 pVM->pgm.s.pRamRangesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pRamRangesR3);
1874 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur->pNextR3; pCur = pCur->pNextR3)
1875#ifdef VBOX_WITH_NEW_PHYS_CODE
1876 pCur->pNextGC = MMHyperR3ToRC(pVM, pCur->pNextR3);
1877#else
1878 {
1879 pCur->pNextGC = MMHyperR3ToRC(pVM, pCur->pNextR3);
1880 if (pCur->pavHCChunkGC)
1881 pCur->pavHCChunkGC = MMHyperHC2GC(pVM, pCur->pavHCChunkHC);
1882 }
1883#endif
1884 }
1885
1886 /*
1887 * Update the two page directories with all page table mappings.
1888 * (One or more of them have changed, that's why we're here.)
1889 */
1890 pVM->pgm.s.pMappingsGC = MMHyperHC2GC(pVM, pVM->pgm.s.pMappingsR3);
1891 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
1892 pCur->pNextGC = MMHyperHC2GC(pVM, pCur->pNextR3);
1893
1894 /* Relocate GC addresses of Page Tables. */
1895 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
1896 {
1897 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
1898 {
1899 pCur->aPTs[i].pPTGC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
1900 pCur->aPTs[i].paPaePTsGC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
1901 }
1902 }
1903
1904 /*
1905 * Dynamic page mapping area.
1906 */
1907 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
1908 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
1909 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
1910
1911 /*
1912 * The Zero page.
1913 */
1914 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1915 AssertRelease(pVM->pgm.s.pvZeroPgR0);
1916
1917 /*
1918 * Physical and virtual handlers.
1919 */
1920 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers, true, pgmR3RelocatePhysHandler, &offDelta);
1921 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesHC->VirtHandlers, true, pgmR3RelocateVirtHandler, &offDelta);
1922 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesHC->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &offDelta);
1923
1924 /*
1925 * The page pool.
1926 */
1927 pgmR3PoolRelocate(pVM);
1928}
1929
1930
1931/**
1932 * Callback function for relocating a physical access handler.
1933 *
1934 * @returns 0 (continue enum)
1935 * @param pNode Pointer to a PGMPHYSHANDLER node.
1936 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1937 * not certain the delta will fit in a void pointer for all possible configs.
1938 */
1939static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
1940{
1941 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
1942 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1943 if (pHandler->pfnHandlerGC)
1944 pHandler->pfnHandlerGC += offDelta;
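 /* pvUserGC values below 64KB are taken to be magic tokens rather than GC
  * pointers, so only real pointers receive the delta (an interpretation of
  * the check below). */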
1945 if ((RTGCUINTPTR)pHandler->pvUserGC >= 0x10000)
1946 pHandler->pvUserGC += offDelta;
1947 return 0;
1948}
1949
1950
1951/**
1952 * Callback function for relocating a virtual access handler.
1953 *
1954 * @returns 0 (continue enum)
1955 * @param pNode Pointer to a PGMVIRTHANDLER node.
1956 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1957 * not certain the delta will fit in a void pointer for all possible configs.
1958 */
1959static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
1960{
1961 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
1962 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1963 Assert( pHandler->enmType == PGMVIRTHANDLERTYPE_ALL
1964 || pHandler->enmType == PGMVIRTHANDLERTYPE_WRITE);
1965 Assert(pHandler->pfnHandlerGC);
1966 pHandler->pfnHandlerGC += offDelta;
1967 return 0;
1968}
1969
1970
1971/**
1972 * Callback function for relocating a virtual access handler for the hypervisor mapping.
1973 *
1974 * @returns 0 (continue enum)
1975 * @param pNode Pointer to a PGMVIRTHANDLER node.
1976 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1977 * not certain the delta will fit in a void pointer for all possible configs.
1978 */
1979static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
1980{
1981 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
1982 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1983 Assert(pHandler->enmType == PGMVIRTHANDLERTYPE_HYPERVISOR);
1984 Assert(pHandler->pfnHandlerGC);
1985 pHandler->pfnHandlerGC += offDelta;
1986 return 0;
1987}
1988
1989
1990/**
1991 * The VM is being reset.
1992 *
1993 * For the PGM component this means that any PD write monitors
1994 * need to be removed.
1995 *
1996 * @param pVM VM handle.
1997 */
1998PGMR3DECL(void) PGMR3Reset(PVM pVM)
1999{
2000 LogFlow(("PGMR3Reset:\n"));
2001 VM_ASSERT_EMT(pVM);
2002
2003 pgmLock(pVM);
2004
2005 /*
2006 * Unfix any fixed mappings and disable CR3 monitoring.
2007 */
2008 pVM->pgm.s.fMappingsFixed = false;
2009 pVM->pgm.s.GCPtrMappingFixed = 0;
2010 pVM->pgm.s.cbMappingFixed = 0;
2011
2012 /* Exit the guest paging mode before the pgm pool gets reset.
2013 * Important for cleaning up the AMD64 case.
2014 */
2015 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
2016 AssertRC(rc);
2017#ifdef DEBUG
2018 DBGFR3InfoLog(pVM, "mappings", NULL);
2019 DBGFR3InfoLog(pVM, "handlers", "all nostat");
2020#endif
2021
2022 /*
2023 * Reset the shadow page pool.
2024 */
2025 pgmR3PoolReset(pVM);
2026
2027 /*
2028 * Re-init other members.
2029 */
2030 pVM->pgm.s.fA20Enabled = true;
2031
2032 /*
2033 * Clear the FFs PGM owns.
2034 */
2035 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3);
2036 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2037
2038 /*
2039 * Reset (zero) RAM pages.
2040 */
2041 rc = pgmR3PhysRamReset(pVM);
2042 if (RT_SUCCESS(rc))
2043 {
2044#ifdef VBOX_WITH_NEW_PHYS_CODE
2045 /*
2046 * Reset (zero) shadow ROM pages.
2047 */
2048 rc = pgmR3PhysRomReset(pVM);
2049#endif
2050 if (RT_SUCCESS(rc))
2051 {
2052 /*
2053 * Switch mode back to real mode.
2054 */
2055 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
2056 STAM_REL_COUNTER_RESET(&pVM->pgm.s.cGuestModeChanges);
2057 }
2058 }
2059
2060 pgmUnlock(pVM);
2061 //return rc;
2062 AssertReleaseRC(rc);
2063}
2064
2065
2066#ifdef VBOX_STRICT
2067/**
2068 * VM state change callback for clearing fNoMorePhysWrites after
2069 * a snapshot has been created.
2070 */
2071static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2072{
2073 if (enmState == VMSTATE_RUNNING)
2074 pVM->pgm.s.fNoMorePhysWrites = false;
2075}
2076#endif
2077
2078
2079/**
2080 * Terminates the PGM.
2081 *
2082 * @returns VBox status code.
2083 * @param pVM Pointer to VM structure.
2084 */
2085PGMR3DECL(int) PGMR3Term(PVM pVM)
2086{
2087 return PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
2088}
2089
2090
2091/**
2092 * Execute state save operation.
2093 *
2094 * @returns VBox status code.
2095 * @param pVM VM Handle.
2096 * @param pSSM SSM operation handle.
2097 */
2098static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM)
2099{
2100 PPGM pPGM = &pVM->pgm.s;
2101
2102 /* No more writes to physical memory after this point! */
2103 pVM->pgm.s.fNoMorePhysWrites = true;
2104
2105 /*
2106 * Save basic data (required / unaffected by relocation).
2107 */
2108#if 1
2109 SSMR3PutBool(pSSM, pPGM->fMappingsFixed);
2110#else
2111 SSMR3PutUInt(pSSM, pPGM->fMappingsFixed);
2112#endif
2113 SSMR3PutGCPtr(pSSM, pPGM->GCPtrMappingFixed);
2114 SSMR3PutU32(pSSM, pPGM->cbMappingFixed);
2115 SSMR3PutUInt(pSSM, pPGM->cbRamSize);
2116 SSMR3PutGCPhys(pSSM, pPGM->GCPhysA20Mask);
2117 SSMR3PutUInt(pSSM, pPGM->fA20Enabled);
2118 SSMR3PutUInt(pSSM, pPGM->fSyncFlags);
2119 SSMR3PutUInt(pSSM, pPGM->enmGuestMode);
2120 SSMR3PutU32(pSSM, ~0); /* Separator. */
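 /* Each section below is delimited by a ~0 (0xffffffff) marker which the load
  * code checks for; the mapping and RAM range records are additionally
  * prefixed with a running sequence number. */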
2121
2122 /*
2123 * The guest mappings.
2124 */
2125 uint32_t i = 0;
2126 for (PPGMMAPPING pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3, i++)
2127 {
2128 SSMR3PutU32(pSSM, i);
2129 SSMR3PutStrZ(pSSM, pMapping->pszDesc); /* This is the best unique id we have... */
2130 SSMR3PutGCPtr(pSSM, pMapping->GCPtr);
2131 SSMR3PutGCUIntPtr(pSSM, pMapping->cPTs);
2132 /* flags are done by the mapping owners! */
2133 }
2134 SSMR3PutU32(pSSM, ~0); /* terminator. */
2135
2136 /*
2137 * Ram range flags and bits.
2138 */
2139 i = 0;
2140 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2141 {
2142 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2143
2144 SSMR3PutU32(pSSM, i);
2145 SSMR3PutGCPhys(pSSM, pRam->GCPhys);
2146 SSMR3PutGCPhys(pSSM, pRam->GCPhysLast);
2147 SSMR3PutGCPhys(pSSM, pRam->cb);
2148 SSMR3PutU8(pSSM, !!pRam->pvHC); /* boolean indicating memory or not. */
2149
2150 /* Flags. */
2151 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2152 for (unsigned iPage = 0; iPage < cPages; iPage++)
2153 SSMR3PutU16(pSSM, (uint16_t)(pRam->aPages[iPage].HCPhys & ~X86_PTE_PAE_PG_MASK)); /** @todo PAGE FLAGS */
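 /* Note that only the non-address bits of HCPhys (the per-page flags kept in
  * the low bits) were saved above; the host physical address itself would be
  * meaningless across a save/restore cycle. */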
2154
2155 /* any memory associated with the range. */
2156 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2157 {
2158 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2159 {
2160 if (pRam->pavHCChunkHC[iChunk])
2161 {
2162 SSMR3PutU8(pSSM, 1); /* chunk present */
2163 SSMR3PutMem(pSSM, pRam->pavHCChunkHC[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2164 }
2165 else
2166 SSMR3PutU8(pSSM, 0); /* no chunk present */
2167 }
2168 }
2169 else if (pRam->pvHC)
2170 {
2171 int rc = SSMR3PutMem(pSSM, pRam->pvHC, pRam->cb);
2172 if (VBOX_FAILURE(rc))
2173 {
2174 Log(("pgmR3Save: SSMR3PutMem(, %p, %#x) -> %Vrc\n", pRam->pvHC, pRam->cb, rc));
2175 return rc;
2176 }
2177 }
2178 }
2179 return SSMR3PutU32(pSSM, ~0); /* terminator. */
2180}
2181
2182
2183/**
2184 * Execute state load operation.
2185 *
2186 * @returns VBox status code.
2187 * @param pVM VM Handle.
2188 * @param pSSM SSM operation handle.
2189 * @param u32Version Data layout version.
2190 */
2191static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
2192{
2193 /*
2194 * Validate version.
2195 */
2196 if (u32Version != PGM_SAVED_STATE_VERSION)
2197 {
2198 AssertMsgFailed(("pgmR3Load: Invalid version u32Version=%d (current %d)!\n", u32Version, PGM_SAVED_STATE_VERSION));
2199 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2200 }
2201
2202 /*
2203 * Call the reset function to make sure all the memory is cleared.
2204 */
2205 PGMR3Reset(pVM);
2206
2207 /*
2208 * Load basic data (required / unaffected by relocation).
2209 */
2210 PPGM pPGM = &pVM->pgm.s;
2211#if 1
2212 SSMR3GetBool(pSSM, &pPGM->fMappingsFixed);
2213#else
2214 uint32_t u;
2215 SSMR3GetU32(pSSM, &u);
2216 pPGM->fMappingsFixed = u;
2217#endif
2218 SSMR3GetGCPtr(pSSM, &pPGM->GCPtrMappingFixed);
2219 SSMR3GetU32(pSSM, &pPGM->cbMappingFixed);
2220
2221 RTUINT cbRamSize;
2222 int rc = SSMR3GetU32(pSSM, &cbRamSize);
2223 if (VBOX_FAILURE(rc))
2224 return rc;
2225 if (cbRamSize != pPGM->cbRamSize)
2226 return VERR_SSM_LOAD_MEMORY_SIZE_MISMATCH;
2227 SSMR3GetGCPhys(pSSM, &pPGM->GCPhysA20Mask);
2228 SSMR3GetUInt(pSSM, &pPGM->fA20Enabled);
2229 SSMR3GetUInt(pSSM, &pPGM->fSyncFlags);
2230 RTUINT uGuestMode;
2231 SSMR3GetUInt(pSSM, &uGuestMode);
2232 pPGM->enmGuestMode = (PGMMODE)uGuestMode;
2233
2234 /* check separator. */
2235 uint32_t u32Sep;
2236 rc = SSMR3GetU32(pSSM, &u32Sep);
2237 if (VBOX_FAILURE(rc))
2238 return rc;
2239 if (u32Sep != (uint32_t)~0)
2240 {
2241 AssertMsgFailed(("u32Sep=%#x (first)\n", u32Sep));
2242 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2243 }
2244
2245 /*
2246 * The guest mappings.
2247 */
2248 uint32_t i = 0;
2249 for (;; i++)
2250 {
2251 /* Check the sequence number / separator. */
2252 rc = SSMR3GetU32(pSSM, &u32Sep);
2253 if (VBOX_FAILURE(rc))
2254 return rc;
2255 if (u32Sep == ~0U)
2256 break;
2257 if (u32Sep != i)
2258 {
2259 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2260 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2261 }
2262
2263 /* get the mapping details. */
2264 char szDesc[256];
2265 szDesc[0] = '\0';
2266 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2267 if (VBOX_FAILURE(rc))
2268 return rc;
2269 RTGCPTR GCPtr;
2270 SSMR3GetGCPtr(pSSM, &GCPtr);
2271 RTGCUINTPTR cPTs;
2272 rc = SSMR3GetGCUIntPtr(pSSM, &cPTs);
2273 if (VBOX_FAILURE(rc))
2274 return rc;
2275
2276 /* find matching range. */
2277 PPGMMAPPING pMapping;
2278 for (pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3)
2279 if ( pMapping->cPTs == cPTs
2280 && !strcmp(pMapping->pszDesc, szDesc))
2281 break;
2282 if (!pMapping)
2283 {
2284 LogRel(("Couldn't find mapping: cPTs=%#x szDesc=%s (GCPtr=%VGv)\n",
2285 cPTs, szDesc, GCPtr));
2286 AssertFailed();
2287 return VERR_SSM_LOAD_CONFIG_MISMATCH;
2288 }
2289
2290 /* relocate it. */
2291 if (pMapping->GCPtr != GCPtr)
2292 {
2293 AssertMsg((GCPtr >> X86_PD_SHIFT << X86_PD_SHIFT) == GCPtr, ("GCPtr=%VGv\n", GCPtr));
2294#if HC_ARCH_BITS == 64
2295            LogRel(("Mapping: %VGv -> %VGv %s\n", pMapping->GCPtr, GCPtr, pMapping->pszDesc));
2296#endif
2297 pgmR3MapRelocate(pVM, pMapping, pMapping->GCPtr, GCPtr);
2298 }
2299 else
2300 Log(("pgmR3Load: '%s' needed no relocation (%VGv)\n", szDesc, GCPtr));
2301 }
2302
2303 /*
2304 * Ram range flags and bits.
2305 */
2306 i = 0;
2307 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2308 {
2309 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2310        /* Check the sequence number / separator. */
2311 rc = SSMR3GetU32(pSSM, &u32Sep);
2312 if (VBOX_FAILURE(rc))
2313 return rc;
2314 if (u32Sep == ~0U)
2315 break;
2316 if (u32Sep != i)
2317 {
2318 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2319 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2320 }
2321
2322 /* Get the range details. */
2323 RTGCPHYS GCPhys;
2324 SSMR3GetGCPhys(pSSM, &GCPhys);
2325 RTGCPHYS GCPhysLast;
2326 SSMR3GetGCPhys(pSSM, &GCPhysLast);
2327 RTGCPHYS cb;
2328 SSMR3GetGCPhys(pSSM, &cb);
2329 uint8_t fHaveBits;
2330 rc = SSMR3GetU8(pSSM, &fHaveBits);
2331 if (VBOX_FAILURE(rc))
2332 return rc;
2333 if (fHaveBits & ~1)
2334 {
2335            AssertMsgFailed(("fHaveBits=%#x\n", fHaveBits));
2336 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2337 }
2338
2339 /* Match it up with the current range. */
2340 if ( GCPhys != pRam->GCPhys
2341 || GCPhysLast != pRam->GCPhysLast
2342 || cb != pRam->cb
2343 || fHaveBits != !!pRam->pvHC)
2344 {
2345 LogRel(("Ram range: %VGp-%VGp %VGp bytes %s\n"
2346 "State : %VGp-%VGp %VGp bytes %s\n",
2347 pRam->GCPhys, pRam->GCPhysLast, pRam->cb, pRam->pvHC ? "bits" : "nobits",
2348 GCPhys, GCPhysLast, cb, fHaveBits ? "bits" : "nobits"));
2349 /*
2350             * If we're loading a state for debugging purposes, don't make a fuss if
2351 * the MMIO[2] and ROM stuff isn't 100% right, just skip the mismatches.
2352 */
2353 if ( SSMR3HandleGetAfter(pSSM) != SSMAFTER_DEBUG_IT
2354 || GCPhys < 8 * _1M)
2355 AssertFailedReturn(VERR_SSM_LOAD_CONFIG_MISMATCH);
2356
2357 RTGCPHYS cPages = ((GCPhysLast - GCPhys) + 1) >> PAGE_SHIFT;
2358 while (cPages-- > 0)
2359 {
2360 uint16_t u16Ignore;
2361 SSMR3GetU16(pSSM, &u16Ignore);
2362 }
2363 continue;
2364 }
2365
2366 /* Flags. */
2367 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2368 for (unsigned iPage = 0; iPage < cPages; iPage++)
2369 {
2370 uint16_t u16 = 0;
2371 SSMR3GetU16(pSSM, &u16);
2372 u16 &= PAGE_OFFSET_MASK & ~( RT_BIT(4) | RT_BIT(5) | RT_BIT(6)
2373 | RT_BIT(7) | RT_BIT(8) | RT_BIT(9) | RT_BIT(10) );
2374 // &= MM_RAM_FLAGS_DYNAMIC_ALLOC | MM_RAM_FLAGS_RESERVED | MM_RAM_FLAGS_ROM | MM_RAM_FLAGS_MMIO | MM_RAM_FLAGS_MMIO2
2375 pRam->aPages[iPage].HCPhys = PGM_PAGE_GET_HCPHYS(&pRam->aPages[iPage]) | (RTHCPHYS)u16; /** @todo PAGE FLAGS */
2376 }
2377
2378 /* any memory associated with the range. */
2379 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2380 {
2381 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2382 {
2383 uint8_t fValidChunk;
2384
2385 rc = SSMR3GetU8(pSSM, &fValidChunk);
2386 if (VBOX_FAILURE(rc))
2387 return rc;
2388 if (fValidChunk > 1)
2389 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2390
2391 if (fValidChunk)
2392 {
2393 if (!pRam->pavHCChunkHC[iChunk])
2394 {
2395 rc = pgmr3PhysGrowRange(pVM, pRam->GCPhys + iChunk * PGM_DYNAMIC_CHUNK_SIZE);
2396 if (VBOX_FAILURE(rc))
2397 return rc;
2398 }
2399 Assert(pRam->pavHCChunkHC[iChunk]);
2400
2401 SSMR3GetMem(pSSM, pRam->pavHCChunkHC[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2402 }
2403 /* else nothing to do */
2404 }
2405 }
2406 else if (pRam->pvHC)
2407 {
2408 int rc = SSMR3GetMem(pSSM, pRam->pvHC, pRam->cb);
2409 if (VBOX_FAILURE(rc))
2410 {
2411                Log(("pgmR3Load: SSMR3GetMem(, %p, %#x) -> %Vrc\n", pRam->pvHC, pRam->cb, rc));
2412 return rc;
2413 }
2414 }
2415 }
2416
2417 /*
2418 * We require a full resync now.
2419 */
2420 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2421 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
2422 pPGM->fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2423 pPGM->fPhysCacheFlushPending = true;
2424 pgmR3HandlerPhysicalUpdateAll(pVM);
2425
2426 /*
2427 * Change the paging mode.
2428 */
2429 rc = PGMR3ChangeMode(pVM, pPGM->enmGuestMode);
2430
2431 /* Restore pVM->pgm.s.GCPhysCR3. */
2432 Assert(pVM->pgm.s.GCPhysCR3 == NIL_RTGCPHYS);
2433 RTGCPHYS GCPhysCR3 = CPUMGetGuestCR3(pVM);
2434 if ( pVM->pgm.s.enmGuestMode == PGMMODE_PAE
2435 || pVM->pgm.s.enmGuestMode == PGMMODE_PAE_NX
2436 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64
2437 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64_NX)
2438 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAE_PAGE_MASK);
2439 else
2440 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAGE_MASK);
2441 pVM->pgm.s.GCPhysCR3 = GCPhysCR3;
2442
2443 return rc;
2444}
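/*
 * A note on the CR3 masking above (architectural background, not taken from
 * this file): in PAE mode CR3 holds the 32-byte aligned physical address of
 * the PDPT, so only bits 5 and up are address bits (X86_CR3_PAE_PAGE_MASK),
 * whereas legacy 32-bit paging uses a page aligned page directory
 * (X86_CR3_PAGE_MASK). The PAE mask is also safe for the AMD64 modes since an
 * AMD64 CR3 is page aligned anyway.
 */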
2445
2446
2447/**
2448 * Show paging mode.
2449 *
2450 * @param pVM VM Handle.
2451 * @param pHlp The info helpers.
2452 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2453 */
2454static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2455{
2456 /* digest argument. */
2457 bool fGuest, fShadow, fHost;
2458 if (pszArgs)
2459 pszArgs = RTStrStripL(pszArgs);
2460 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2461 fShadow = fHost = fGuest = true;
2462 else
2463 {
2464 fShadow = fHost = fGuest = false;
2465 if (strstr(pszArgs, "guest"))
2466 fGuest = true;
2467 if (strstr(pszArgs, "shadow"))
2468 fShadow = true;
2469 if (strstr(pszArgs, "host"))
2470 fHost = true;
2471 }
2472
2473 /* print info. */
2474 if (fGuest)
2475 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s, changed %RU64 times, A20 %s\n",
2476 PGMGetModeName(pVM->pgm.s.enmGuestMode), pVM->pgm.s.cGuestModeChanges.c,
2477 pVM->pgm.s.fA20Enabled ? "enabled" : "disabled");
2478 if (fShadow)
2479 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode));
2480 if (fHost)
2481 {
2482 const char *psz;
2483 switch (pVM->pgm.s.enmHostMode)
2484 {
2485 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2486 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2487 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2488 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2489 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2490 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2491 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2492 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2493 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2494 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2495 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2496 default: psz = "unknown"; break;
2497 }
2498 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2499 }
2500}
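/*
 * A minimal usage sketch, assuming this handler is registered with DBGF under
 * an info name like "mode" (the registration happens elsewhere in PGM):
 *
 *      DBGFR3Info(pVM, "mode", "guest shadow", NULL);
 *
 * Passing NULL for pHlp selects the default (log) helpers; "guest shadow"
 * ends up as pszArgs, so only the guest and shadow lines are printed.
 */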
2501
2502
2503/**
2504 * Dump the registered physical memory (RAM) ranges to the log.
2505 *
2506 * @param pVM VM Handle.
2507 * @param pHlp The info helpers.
2508 * @param pszArgs Arguments, ignored.
2509 */
2510static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2511{
2512 NOREF(pszArgs);
2513 pHlp->pfnPrintf(pHlp,
2514 "RAM ranges (pVM=%p)\n"
2515 "%.*s %.*s\n",
2516 pVM,
2517 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2518 sizeof(RTHCPTR) * 2, "pvHC ");
2519
2520 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
2521 pHlp->pfnPrintf(pHlp,
2522 "%RGp-%RGp %RHv %s\n",
2523 pCur->GCPhys,
2524 pCur->GCPhysLast,
2525 pCur->pvHC,
2526 pCur->pszDesc);
2527}
2528
2529/**
2530 * Dump the page directory to the log.
2531 *
2532 * @param pVM VM Handle.
2533 * @param pHlp The info helpers.
2534 * @param pszArgs Arguments, ignored.
2535 */
2536static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2537{
2538/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2539 /* Big pages supported? */
2540 const bool fPSE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PSE);
2541
2542 /* Global pages supported? */
2543 const bool fPGE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PGE);
2544
2545 NOREF(pszArgs);
2546
2547 /*
2548 * Get page directory addresses.
2549 */
2550 PX86PD pPDSrc = pVM->pgm.s.pGuestPDHC;
2551 Assert(pPDSrc);
2552 Assert(MMPhysGCPhys2HCVirt(pVM, (RTGCPHYS)(CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK), sizeof(*pPDSrc)) == pPDSrc);
2553
2554 /*
2555 * Iterate the page directory.
2556 */
2557 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
2558 {
2559 X86PDE PdeSrc = pPDSrc->a[iPD];
2560 if (PdeSrc.n.u1Present)
2561 {
2562 if (PdeSrc.b.u1Size && fPSE)
2563 {
2564 pHlp->pfnPrintf(pHlp,
2565 "%04X - %VGp P=%d U=%d RW=%d G=%d - BIG\n",
2566 iPD,
2567 pgmGstGet4MBPhysPage(&pVM->pgm.s, PdeSrc),
2568 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2569 }
2570 else
2571 {
2572 pHlp->pfnPrintf(pHlp,
2573 "%04X - %VGp P=%d U=%d RW=%d [G=%d]\n",
2574 iPD,
2575 PdeSrc.u & X86_PDE_PG_MASK,
2576 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2577 }
2578 }
2579 }
2580}
2581
2582
2583/**
2584 * Service a VMMCALLHOST_PGM_LOCK call.
2585 *
2586 * @returns VBox status code.
2587 * @param pVM The VM handle.
2588 */
2589PDMR3DECL(int) PGMR3LockCall(PVM pVM)
2590{
2591 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSect, true /* fHostCall */);
2592 AssertRC(rc);
2593 return rc;
2594}
2595
2596
2597/**
2598 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2599 *
2600 * @returns PGM_TYPE_*.
2601 * @param pgmMode The mode value to convert.
2602 */
2603DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2604{
2605 switch (pgmMode)
2606 {
2607 case PGMMODE_REAL: return PGM_TYPE_REAL;
2608 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2609 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2610 case PGMMODE_PAE:
2611 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2612 case PGMMODE_AMD64:
2613 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2614 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
2615 case PGMMODE_EPT: return PGM_TYPE_EPT;
2616 default:
2617 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2618 }
2619}
2620
2621
2622/**
2623 * Gets the index into the paging mode data array of a SHW+GST mode.
2624 *
2625 * @returns PGM::paPagingData index.
2626 * @param uShwType The shadow paging mode type.
2627 * @param uGstType The guest paging mode type.
2628 */
2629DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2630{
2631 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_NESTED);
2632 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2633 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
2634 + (uGstType - PGM_TYPE_REAL);
2635}
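/*
 * Worked example: the guest dimension has PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1
 * = 5 entries per shadow row (assuming the PGM_TYPE_* values are consecutive,
 * which pgmModeDataMaxIndex below relies on as well). So, for
 * uShwType = PGM_TYPE_PAE (one row past PGM_TYPE_32BIT) and
 * uGstType = PGM_TYPE_32BIT (two entries past PGM_TYPE_REAL):
 *
 *      index = 1 * 5 + 2 = 7
 */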
2636
2637
2638/**
2639 * Gets the index into the paging mode data array of a SHW+GST mode.
2640 *
2641 * @returns PGM::paPagingData index.
2642 * @param enmShw The shadow paging mode.
2643 * @param enmGst The guest paging mode.
2644 */
2645DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2646{
2647 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
2648 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2649 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2650}
2651
2652
2653/**
2654 * Calculates the max data index.
2655 * @returns The number of entries in the paging data array.
2656 */
2657DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2658{
2659 return pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64) + 1;
2660}
2661
2662
2663/**
2664 * Initializes the paging mode data kept in PGM::paModeData.
2665 *
2666 * @param pVM The VM handle.
2667 * @param fResolveGCAndR0 Indicates whether GC and Ring-0 symbols can be resolved now.
2668 * This is used early in the init process to avoid trouble with PDM
2669 * not being initialized yet.
2670 */
2671static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2672{
2673 PPGMMODEDATA pModeData;
2674 int rc;
2675
2676 /*
2677 * Allocate the array on the first call.
2678 */
2679 if (!pVM->pgm.s.paModeData)
2680 {
2681 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2682 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2683 }
2684
2685 /*
2686 * Initialize the array entries.
2687 */
2688 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2689 pModeData->uShwType = PGM_TYPE_32BIT;
2690 pModeData->uGstType = PGM_TYPE_REAL;
2691 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2692 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2693 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2694
2695    pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_PROT)];
2696 pModeData->uShwType = PGM_TYPE_32BIT;
2697 pModeData->uGstType = PGM_TYPE_PROT;
2698 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2699 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2700 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2701
2702 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2703 pModeData->uShwType = PGM_TYPE_32BIT;
2704 pModeData->uGstType = PGM_TYPE_32BIT;
2705 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2706 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2707 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2708
2709 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2710 pModeData->uShwType = PGM_TYPE_PAE;
2711 pModeData->uGstType = PGM_TYPE_REAL;
2712 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2713 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2714 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2715
2716 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2717 pModeData->uShwType = PGM_TYPE_PAE;
2718 pModeData->uGstType = PGM_TYPE_PROT;
2719 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2720 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2721 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2722
2723 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
2724 pModeData->uShwType = PGM_TYPE_PAE;
2725 pModeData->uGstType = PGM_TYPE_32BIT;
2726 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2727 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2728 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2729
2730 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
2731 pModeData->uShwType = PGM_TYPE_PAE;
2732 pModeData->uGstType = PGM_TYPE_PAE;
2733 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2734 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2735 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2736
2737 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
2738 pModeData->uShwType = PGM_TYPE_AMD64;
2739 pModeData->uGstType = PGM_TYPE_AMD64;
2740 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2741 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2742 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2743
2744 /* The nested paging mode. */
2745 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
2746 pModeData->uShwType = PGM_TYPE_NESTED;
2747 pModeData->uGstType = PGM_TYPE_REAL;
2748 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2749 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2750
2751    pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PROT)];
2752 pModeData->uShwType = PGM_TYPE_NESTED;
2753 pModeData->uGstType = PGM_TYPE_PROT;
2754 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2755 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2756
2757 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
2758 pModeData->uShwType = PGM_TYPE_NESTED;
2759 pModeData->uGstType = PGM_TYPE_32BIT;
2760 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2761 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2762
2763 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
2764 pModeData->uShwType = PGM_TYPE_NESTED;
2765 pModeData->uGstType = PGM_TYPE_PAE;
2766 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2767 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2768
2769 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
2770 pModeData->uShwType = PGM_TYPE_NESTED;
2771 pModeData->uGstType = PGM_TYPE_AMD64;
2772 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2773 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2774
2775    /* The shadow part of the nested paging mode data depends on the host paging mode (AMD-V only). */
2776    switch (pVM->pgm.s.enmHostMode)
2777 {
2778 case SUPPAGINGMODE_32_BIT:
2779 case SUPPAGINGMODE_32_BIT_GLOBAL:
2780            for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
2781 {
2782 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2783 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2784 }
2785 break;
2786
2787 case SUPPAGINGMODE_PAE:
2788 case SUPPAGINGMODE_PAE_NX:
2789 case SUPPAGINGMODE_PAE_GLOBAL:
2790 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2791            for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
2792 {
2793 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2794 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2795 }
2796 break;
2797
2798 case SUPPAGINGMODE_AMD64:
2799 case SUPPAGINGMODE_AMD64_GLOBAL:
2800 case SUPPAGINGMODE_AMD64_NX:
2801 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2802            for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
2803 {
2804 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2805 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2806 }
2807 break;
2808 default:
2809 AssertFailed();
2810 break;
2811 }
2812
2813#ifdef PGM_WITH_EPT
2814    /* Extended Page Tables (EPT) / Intel VT-x */
2815 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
2816 pModeData->uShwType = PGM_TYPE_EPT;
2817 pModeData->uGstType = PGM_TYPE_REAL;
2818 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2819 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2820 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2821
2822 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
2823 pModeData->uShwType = PGM_TYPE_EPT;
2824 pModeData->uGstType = PGM_TYPE_PROT;
2825 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2826 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2827 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2828
2829 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
2830 pModeData->uShwType = PGM_TYPE_EPT;
2831 pModeData->uGstType = PGM_TYPE_32BIT;
2832 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2833 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2834 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2835
2836 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
2837 pModeData->uShwType = PGM_TYPE_EPT;
2838 pModeData->uGstType = PGM_TYPE_PAE;
2839 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2840 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2841 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2842
2843 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
2844 pModeData->uShwType = PGM_TYPE_EPT;
2845 pModeData->uGstType = PGM_TYPE_AMD64;
2846 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2847 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2848 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2849#endif /* PGM_WITH_EPT */
2850 return VINF_SUCCESS;
2851}
2852
2853
2854/**
2855 * Switches to different (or, in the relocation case, relocated) mode data.
2856 *
2857 * @param pVM The VM handle.
2858 * @param enmShw The shadow paging mode.
2859 * @param enmGst The guest paging mode.
2860 */
2861static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst)
2862{
2863 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
2864
2865 Assert(pModeData->uGstType == pgmModeToType(enmGst));
2866 Assert(pModeData->uShwType == pgmModeToType(enmShw));
2867
2868 /* shadow */
2869 pVM->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
2870 pVM->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
2871 pVM->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
2872 Assert(pVM->pgm.s.pfnR3ShwGetPage);
2873 pVM->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
2874
2875 pVM->pgm.s.pfnGCShwGetPage = pModeData->pfnGCShwGetPage;
2876 pVM->pgm.s.pfnGCShwModifyPage = pModeData->pfnGCShwModifyPage;
2877
2878 pVM->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
2879 pVM->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
2880
2881
2882 /* guest */
2883 pVM->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
2884 pVM->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
2885 pVM->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
2886 Assert(pVM->pgm.s.pfnR3GstGetPage);
2887 pVM->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
2888 pVM->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
2889 pVM->pgm.s.pfnR3GstMonitorCR3 = pModeData->pfnR3GstMonitorCR3;
2890 pVM->pgm.s.pfnR3GstUnmonitorCR3 = pModeData->pfnR3GstUnmonitorCR3;
2891 pVM->pgm.s.pfnR3GstMapCR3 = pModeData->pfnR3GstMapCR3;
2892 pVM->pgm.s.pfnR3GstUnmapCR3 = pModeData->pfnR3GstUnmapCR3;
2893 pVM->pgm.s.pfnR3GstWriteHandlerCR3 = pModeData->pfnR3GstWriteHandlerCR3;
2894 pVM->pgm.s.pszR3GstWriteHandlerCR3 = pModeData->pszR3GstWriteHandlerCR3;
2895 pVM->pgm.s.pfnR3GstPAEWriteHandlerCR3 = pModeData->pfnR3GstPAEWriteHandlerCR3;
2896 pVM->pgm.s.pszR3GstPAEWriteHandlerCR3 = pModeData->pszR3GstPAEWriteHandlerCR3;
2897
2898 pVM->pgm.s.pfnGCGstGetPage = pModeData->pfnGCGstGetPage;
2899 pVM->pgm.s.pfnGCGstModifyPage = pModeData->pfnGCGstModifyPage;
2900 pVM->pgm.s.pfnGCGstGetPDE = pModeData->pfnGCGstGetPDE;
2901 pVM->pgm.s.pfnGCGstMonitorCR3 = pModeData->pfnGCGstMonitorCR3;
2902 pVM->pgm.s.pfnGCGstUnmonitorCR3 = pModeData->pfnGCGstUnmonitorCR3;
2903 pVM->pgm.s.pfnGCGstMapCR3 = pModeData->pfnGCGstMapCR3;
2904 pVM->pgm.s.pfnGCGstUnmapCR3 = pModeData->pfnGCGstUnmapCR3;
2905 pVM->pgm.s.pfnGCGstWriteHandlerCR3 = pModeData->pfnGCGstWriteHandlerCR3;
2906 pVM->pgm.s.pfnGCGstPAEWriteHandlerCR3 = pModeData->pfnGCGstPAEWriteHandlerCR3;
2907
2908 pVM->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
2909 pVM->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
2910 pVM->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
2911 pVM->pgm.s.pfnR0GstMonitorCR3 = pModeData->pfnR0GstMonitorCR3;
2912 pVM->pgm.s.pfnR0GstUnmonitorCR3 = pModeData->pfnR0GstUnmonitorCR3;
2913 pVM->pgm.s.pfnR0GstMapCR3 = pModeData->pfnR0GstMapCR3;
2914 pVM->pgm.s.pfnR0GstUnmapCR3 = pModeData->pfnR0GstUnmapCR3;
2915 pVM->pgm.s.pfnR0GstWriteHandlerCR3 = pModeData->pfnR0GstWriteHandlerCR3;
2916 pVM->pgm.s.pfnR0GstPAEWriteHandlerCR3 = pModeData->pfnR0GstPAEWriteHandlerCR3;
2917
2918
2919 /* both */
2920 pVM->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
2921 pVM->pgm.s.pfnR3BthTrap0eHandler = pModeData->pfnR3BthTrap0eHandler;
2922 pVM->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
2923 pVM->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
2924 Assert(pVM->pgm.s.pfnR3BthSyncCR3);
2925 pVM->pgm.s.pfnR3BthSyncPage = pModeData->pfnR3BthSyncPage;
2926 pVM->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
2927 pVM->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
2928#ifdef VBOX_STRICT
2929 pVM->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
2930#endif
2931
2932 pVM->pgm.s.pfnGCBthTrap0eHandler = pModeData->pfnGCBthTrap0eHandler;
2933 pVM->pgm.s.pfnGCBthInvalidatePage = pModeData->pfnGCBthInvalidatePage;
2934 pVM->pgm.s.pfnGCBthSyncCR3 = pModeData->pfnGCBthSyncCR3;
2935 pVM->pgm.s.pfnGCBthSyncPage = pModeData->pfnGCBthSyncPage;
2936 pVM->pgm.s.pfnGCBthPrefetchPage = pModeData->pfnGCBthPrefetchPage;
2937 pVM->pgm.s.pfnGCBthVerifyAccessSyncPage = pModeData->pfnGCBthVerifyAccessSyncPage;
2938#ifdef VBOX_STRICT
2939 pVM->pgm.s.pfnGCBthAssertCR3 = pModeData->pfnGCBthAssertCR3;
2940#endif
2941
2942 pVM->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
2943 pVM->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
2944 pVM->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
2945 pVM->pgm.s.pfnR0BthSyncPage = pModeData->pfnR0BthSyncPage;
2946 pVM->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
2947 pVM->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
2948#ifdef VBOX_STRICT
2949 pVM->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
2950#endif
2951}
2952
2953
2954#ifdef DEBUG_bird
2955#include <stdlib.h> /* getenv() remove me! */
2956#endif
2957
2958/**
2959 * Calculates the shadow paging mode.
2960 *
2961 * @returns The shadow paging mode.
2962 * @param pVM VM handle.
2963 * @param enmGuestMode The guest mode.
2964 * @param enmHostMode The host mode.
2965 * @param enmShadowMode The current shadow mode.
2966 * @param penmSwitcher Where to store the switcher to use.
2967 * VMMSWITCHER_INVALID means no change.
2968 */
2969static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
2970{
2971 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
2972 switch (enmGuestMode)
2973 {
2974 /*
2975 * When switching to real or protected mode we don't change
2976 * anything since it's likely that we'll switch back pretty soon.
2977 *
2978         * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID,
2979         * and this code is then supposed to determine which shadow paging
2980         * mode and switcher to use during init.
2981 */
2982 case PGMMODE_REAL:
2983 case PGMMODE_PROTECTED:
2984 if ( enmShadowMode != PGMMODE_INVALID
2985 && !HWACCMIsEnabled(pVM) /* always switch in hwaccm mode! */)
2986 break; /* (no change) */
2987
2988 switch (enmHostMode)
2989 {
2990 case SUPPAGINGMODE_32_BIT:
2991 case SUPPAGINGMODE_32_BIT_GLOBAL:
2992 enmShadowMode = PGMMODE_32_BIT;
2993 enmSwitcher = VMMSWITCHER_32_TO_32;
2994 break;
2995
2996 case SUPPAGINGMODE_PAE:
2997 case SUPPAGINGMODE_PAE_NX:
2998 case SUPPAGINGMODE_PAE_GLOBAL:
2999 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3000 enmShadowMode = PGMMODE_PAE;
3001 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3002#ifdef DEBUG_bird
3003if (getenv("VBOX_32BIT"))
3004{
3005 enmShadowMode = PGMMODE_32_BIT;
3006 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3007}
3008#endif
3009 break;
3010
3011 case SUPPAGINGMODE_AMD64:
3012 case SUPPAGINGMODE_AMD64_GLOBAL:
3013 case SUPPAGINGMODE_AMD64_NX:
3014 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3015 enmShadowMode = PGMMODE_PAE;
3016 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3017 break;
3018
3019 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3020 }
3021 break;
3022
3023 case PGMMODE_32_BIT:
3024 switch (enmHostMode)
3025 {
3026 case SUPPAGINGMODE_32_BIT:
3027 case SUPPAGINGMODE_32_BIT_GLOBAL:
3028 enmShadowMode = PGMMODE_32_BIT;
3029 enmSwitcher = VMMSWITCHER_32_TO_32;
3030 break;
3031
3032 case SUPPAGINGMODE_PAE:
3033 case SUPPAGINGMODE_PAE_NX:
3034 case SUPPAGINGMODE_PAE_GLOBAL:
3035 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3036 enmShadowMode = PGMMODE_PAE;
3037 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3038#ifdef DEBUG_bird
3039if (getenv("VBOX_32BIT"))
3040{
3041 enmShadowMode = PGMMODE_32_BIT;
3042 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3043}
3044#endif
3045 break;
3046
3047 case SUPPAGINGMODE_AMD64:
3048 case SUPPAGINGMODE_AMD64_GLOBAL:
3049 case SUPPAGINGMODE_AMD64_NX:
3050 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3051 enmShadowMode = PGMMODE_PAE;
3052 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3053 break;
3054
3055 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3056 }
3057 break;
3058
3059 case PGMMODE_PAE:
3060 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3061 switch (enmHostMode)
3062 {
3063 case SUPPAGINGMODE_32_BIT:
3064 case SUPPAGINGMODE_32_BIT_GLOBAL:
3065 enmShadowMode = PGMMODE_PAE;
3066 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3067 break;
3068
3069 case SUPPAGINGMODE_PAE:
3070 case SUPPAGINGMODE_PAE_NX:
3071 case SUPPAGINGMODE_PAE_GLOBAL:
3072 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3073 enmShadowMode = PGMMODE_PAE;
3074 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3075 break;
3076
3077 case SUPPAGINGMODE_AMD64:
3078 case SUPPAGINGMODE_AMD64_GLOBAL:
3079 case SUPPAGINGMODE_AMD64_NX:
3080 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3081 enmShadowMode = PGMMODE_PAE;
3082 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3083 break;
3084
3085 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3086 }
3087 break;
3088
3089 case PGMMODE_AMD64:
3090 case PGMMODE_AMD64_NX:
3091 switch (enmHostMode)
3092 {
3093 case SUPPAGINGMODE_32_BIT:
3094 case SUPPAGINGMODE_32_BIT_GLOBAL:
3095 enmShadowMode = PGMMODE_PAE;
3096 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3097 break;
3098
3099 case SUPPAGINGMODE_PAE:
3100 case SUPPAGINGMODE_PAE_NX:
3101 case SUPPAGINGMODE_PAE_GLOBAL:
3102 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3103 enmShadowMode = PGMMODE_PAE;
3104 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3105 break;
3106
3107 case SUPPAGINGMODE_AMD64:
3108 case SUPPAGINGMODE_AMD64_GLOBAL:
3109 case SUPPAGINGMODE_AMD64_NX:
3110 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3111 enmShadowMode = PGMMODE_AMD64;
3112 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3113 break;
3114
3115 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3116 }
3117 break;
3118
3119
3120 default:
3121 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3122 return PGMMODE_INVALID;
3123 }
3124    /* Override the shadow mode if nested paging is active. */
3125 if (HWACCMIsNestedPagingActive(pVM))
3126 enmShadowMode = HWACCMGetPagingMode(pVM);
3127
3128 *penmSwitcher = enmSwitcher;
3129 return enmShadowMode;
3130}
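/*
 * Examples read off the switch above: a 32-bit or PAE guest on an AMD64 host
 * gets a PGMMODE_PAE shadow mode with VMMSWITCHER_AMD64_TO_PAE, an AMD64
 * guest on an AMD64 host gets PGMMODE_AMD64 with VMMSWITCHER_AMD64_TO_AMD64,
 * and when nested paging is active the HWACCM paging mode overrides whatever
 * was calculated.
 */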
3131
3132/**
3133 * Performs the actual mode change.
3134 * This is called by PGMChangeMode and pgmR3InitPaging().
3135 *
3136 * @returns VBox status code.
3137 * @param pVM VM handle.
3138 * @param enmGuestMode The new guest mode. This is assumed to be different from
3139 * the current mode.
3140 */
3141PGMR3DECL(int) PGMR3ChangeMode(PVM pVM, PGMMODE enmGuestMode)
3142{
3143 LogFlow(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3144 STAM_REL_COUNTER_INC(&pVM->pgm.s.cGuestModeChanges);
3145
3146 /*
3147 * Calc the shadow mode and switcher.
3148 */
3149 VMMSWITCHER enmSwitcher;
3150 PGMMODE enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVM->pgm.s.enmShadowMode, &enmSwitcher);
3151 if (enmSwitcher != VMMSWITCHER_INVALID)
3152 {
3153 /*
3154 * Select new switcher.
3155 */
3156 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3157 if (VBOX_FAILURE(rc))
3158 {
3159 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Vrc\n", enmSwitcher, rc));
3160 return rc;
3161 }
3162 }
3163
3164 /*
3165 * Exit old mode(s).
3166 */
3167 /* shadow */
3168 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3169 {
3170 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3171 if (PGM_SHW_PFN(Exit, pVM))
3172 {
3173 int rc = PGM_SHW_PFN(Exit, pVM)(pVM);
3174 if (VBOX_FAILURE(rc))
3175 {
3176 AssertMsgFailed(("Exit failed for shadow mode %d: %Vrc\n", pVM->pgm.s.enmShadowMode, rc));
3177 return rc;
3178 }
3179 }
3180
3181 }
3182 else
3183 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode)));
3184
3185 /* guest */
3186 if (PGM_GST_PFN(Exit, pVM))
3187 {
3188 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
3189 if (VBOX_FAILURE(rc))
3190 {
3191 AssertMsgFailed(("Exit failed for guest mode %d: %Vrc\n", pVM->pgm.s.enmGuestMode, rc));
3192 return rc;
3193 }
3194 }
3195
3196 /*
3197 * Load new paging mode data.
3198 */
3199 pgmR3ModeDataSwitch(pVM, enmShadowMode, enmGuestMode);
3200
3201 /*
3202 * Enter new shadow mode (if changed).
3203 */
3204 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3205 {
3206 int rc;
3207 pVM->pgm.s.enmShadowMode = enmShadowMode;
3208 switch (enmShadowMode)
3209 {
3210 case PGMMODE_32_BIT:
3211 rc = PGM_SHW_NAME_32BIT(Enter)(pVM);
3212 break;
3213 case PGMMODE_PAE:
3214 case PGMMODE_PAE_NX:
3215 rc = PGM_SHW_NAME_PAE(Enter)(pVM);
3216 break;
3217 case PGMMODE_AMD64:
3218 case PGMMODE_AMD64_NX:
3219 rc = PGM_SHW_NAME_AMD64(Enter)(pVM);
3220 break;
3221 case PGMMODE_NESTED:
3222 rc = PGM_SHW_NAME_NESTED(Enter)(pVM);
3223 break;
3224#ifdef PGM_WITH_EPT
3225 case PGMMODE_EPT:
3226 rc = PGM_SHW_NAME_EPT(Enter)(pVM);
3227 break;
3228#endif
3229 case PGMMODE_REAL:
3230 case PGMMODE_PROTECTED:
3231 default:
3232 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3233 return VERR_INTERNAL_ERROR;
3234 }
3235 if (VBOX_FAILURE(rc))
3236 {
3237 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Vrc\n", enmShadowMode, rc));
3238 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
3239 return rc;
3240 }
3241 }
3242
3243 /* We must flush the PGM pool cache if the guest mode changes; we don't always
3244 * switch shadow paging mode (e.g. protected->32-bit) and shouldn't reuse
3245 * the shadow page tables.
3246 *
3247 * That only applies when switching between paging and non-paging modes.
3248 *
3249 * @todo A20 setting
3250 */
3251 if ( pVM->pgm.s.CTXSUFF(pPool)
3252 && !HWACCMIsNestedPagingActive(pVM)
3253 && PGMMODE_WITH_PAGING(pVM->pgm.s.enmGuestMode) != PGMMODE_WITH_PAGING(enmGuestMode))
3254 {
3255 Log(("PGMR3ChangeMode: changing guest paging mode -> flush pgm pool cache!\n"));
3256 pgmPoolFlushAll(pVM);
3257 }
3258
3259 /*
3260 * Enter the new guest and shadow+guest modes.
3261 */
3262 int rc = -1;
3263 int rc2 = -1;
3264 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3265 pVM->pgm.s.enmGuestMode = enmGuestMode;
3266 switch (enmGuestMode)
3267 {
3268 case PGMMODE_REAL:
3269 rc = PGM_GST_NAME_REAL(Enter)(pVM, NIL_RTGCPHYS);
3270 switch (pVM->pgm.s.enmShadowMode)
3271 {
3272 case PGMMODE_32_BIT:
3273 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3274 break;
3275 case PGMMODE_PAE:
3276 case PGMMODE_PAE_NX:
3277 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVM, NIL_RTGCPHYS);
3278 break;
3279 case PGMMODE_NESTED:
3280 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVM, NIL_RTGCPHYS);
3281 break;
3282#ifdef PGM_WITH_EPT
3283 case PGMMODE_EPT:
3284 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3285 break;
3286#endif
3287 case PGMMODE_AMD64:
3288 case PGMMODE_AMD64_NX:
3289 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3290 default: AssertFailed(); break;
3291 }
3292 break;
3293
3294 case PGMMODE_PROTECTED:
3295 rc = PGM_GST_NAME_PROT(Enter)(pVM, NIL_RTGCPHYS);
3296 switch (pVM->pgm.s.enmShadowMode)
3297 {
3298 case PGMMODE_32_BIT:
3299 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3300 break;
3301 case PGMMODE_PAE:
3302 case PGMMODE_PAE_NX:
3303 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVM, NIL_RTGCPHYS);
3304 break;
3305 case PGMMODE_NESTED:
3306 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVM, NIL_RTGCPHYS);
3307 break;
3308#ifdef PGM_WITH_EPT
3309 case PGMMODE_EPT:
3310 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3311 break;
3312#endif
3313 case PGMMODE_AMD64:
3314 case PGMMODE_AMD64_NX:
3315 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3316 default: AssertFailed(); break;
3317 }
3318 break;
3319
3320 case PGMMODE_32_BIT:
3321 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK;
3322 rc = PGM_GST_NAME_32BIT(Enter)(pVM, GCPhysCR3);
3323 switch (pVM->pgm.s.enmShadowMode)
3324 {
3325 case PGMMODE_32_BIT:
3326 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVM, GCPhysCR3);
3327 break;
3328 case PGMMODE_PAE:
3329 case PGMMODE_PAE_NX:
3330 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVM, GCPhysCR3);
3331 break;
3332 case PGMMODE_NESTED:
3333 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVM, GCPhysCR3);
3334 break;
3335#ifdef PGM_WITH_EPT
3336 case PGMMODE_EPT:
3337 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVM, GCPhysCR3);
3338 break;
3339#endif
3340 case PGMMODE_AMD64:
3341 case PGMMODE_AMD64_NX:
3342 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3343 default: AssertFailed(); break;
3344 }
3345 break;
3346
3347 case PGMMODE_PAE_NX:
3348 case PGMMODE_PAE:
3349 {
3350 uint32_t u32Dummy, u32Features;
3351
3352 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3353 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3354 {
3355 /* Pause first, then inform Main. */
3356 rc = VMR3SuspendNoSave(pVM);
3357 AssertRC(rc);
3358
3359 VMSetRuntimeError(pVM, true, "PAEmode",
3360 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. Experimental PAE support can be enabled using the -pae option with VBoxManage"));
3361                /* We must return VINF_SUCCESS here; otherwise the recompiler will assert. */
3362 return VINF_SUCCESS;
3363 }
3364 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAE_PAGE_MASK;
3365 rc = PGM_GST_NAME_PAE(Enter)(pVM, GCPhysCR3);
3366 switch (pVM->pgm.s.enmShadowMode)
3367 {
3368 case PGMMODE_PAE:
3369 case PGMMODE_PAE_NX:
3370 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVM, GCPhysCR3);
3371 break;
3372 case PGMMODE_NESTED:
3373 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVM, GCPhysCR3);
3374 break;
3375#ifdef PGM_WITH_EPT
3376 case PGMMODE_EPT:
3377 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVM, GCPhysCR3);
3378 break;
3379#endif
3380 case PGMMODE_32_BIT:
3381 case PGMMODE_AMD64:
3382 case PGMMODE_AMD64_NX:
3383 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3384 default: AssertFailed(); break;
3385 }
3386 break;
3387 }
3388
3389 case PGMMODE_AMD64_NX:
3390 case PGMMODE_AMD64:
3391 GCPhysCR3 = CPUMGetGuestCR3(pVM) & 0xfffffffffffff000ULL; /** @todo define this mask! */
3392 rc = PGM_GST_NAME_AMD64(Enter)(pVM, GCPhysCR3);
3393 switch (pVM->pgm.s.enmShadowMode)
3394 {
3395 case PGMMODE_AMD64:
3396 case PGMMODE_AMD64_NX:
3397 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVM, GCPhysCR3);
3398 break;
3399 case PGMMODE_NESTED:
3400 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVM, GCPhysCR3);
3401 break;
3402#ifdef PGM_WITH_EPT
3403 case PGMMODE_EPT:
3404 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVM, GCPhysCR3);
3405 break;
3406#endif
3407 case PGMMODE_32_BIT:
3408 case PGMMODE_PAE:
3409 case PGMMODE_PAE_NX:
3410 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
3411 default: AssertFailed(); break;
3412 }
3413 break;
3414
3415 default:
3416 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3417 rc = VERR_NOT_IMPLEMENTED;
3418 break;
3419 }
3420
3421 /* status codes. */
3422 AssertRC(rc);
3423 AssertRC(rc2);
3424 if (VBOX_SUCCESS(rc))
3425 {
3426 rc = rc2;
3427 if (VBOX_SUCCESS(rc)) /* no informational status codes. */
3428 rc = VINF_SUCCESS;
3429 }
3430
3431 /*
3432 * Notify SELM so it can update the TSSes with correct CR3s.
3433 */
3434 SELMR3PagingModeChanged(pVM);
3435
3436 /* Notify HWACCM as well. */
3437 HWACCMR3PagingModeChanged(pVM, pVM->pgm.s.enmShadowMode);
3438 return rc;
3439}
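/*
 * Usage sketch: as noted in the function comment this is reached via
 * PGMChangeMode, e.g. when the guest enables paging with CR4.PAE set the
 * caller ends up doing the equivalent of:
 *
 *      int rc = PGMR3ChangeMode(pVM, PGMMODE_PAE);
 *
 * (or PGMMODE_PAE_NX when EFER.NXE is set; deriving the mode from
 * CR0/CR4/EFER is the caller's job, not this function's).
 */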
3440
3441
3442/**
3443 * Dumps a PAE shadow page table.
3444 *
3445 * @returns VBox status code (VINF_SUCCESS).
3446 * @param pVM The VM handle.
3447 * @param pPT Pointer to the page table.
3448 * @param u64Address The virtual address at which the page table starts.
3449 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3450 * @param cMaxDepth The maximum depth.
3451 * @param pHlp Pointer to the output functions.
3452 */
3453static int pgmR3DumpHierarchyHCPaePT(PVM pVM, PX86PTPAE pPT, uint64_t u64Address, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3454{
3455 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3456 {
3457 X86PTEPAE Pte = pPT->a[i];
3458 if (Pte.n.u1Present)
3459 {
3460 pHlp->pfnPrintf(pHlp,
3461 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3462 ? "%016llx 3 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n"
3463 : "%08llx 2 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n",
3464 u64Address + ((uint64_t)i << X86_PT_PAE_SHIFT),
3465 Pte.n.u1Write ? 'W' : 'R',
3466 Pte.n.u1User ? 'U' : 'S',
3467 Pte.n.u1Accessed ? 'A' : '-',
3468 Pte.n.u1Dirty ? 'D' : '-',
3469 Pte.n.u1Global ? 'G' : '-',
3470 Pte.n.u1WriteThru ? "WT" : "--",
3471 Pte.n.u1CacheDisable? "CD" : "--",
3472 Pte.n.u1PAT ? "AT" : "--",
3473 Pte.n.u1NoExecute ? "NX" : "--",
3474 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3475 Pte.u & RT_BIT(10) ? '1' : '0',
3476 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED? 'v' : '-',
3477 Pte.u & X86_PTE_PAE_PG_MASK);
3478 }
3479 }
3480 return VINF_SUCCESS;
3481}
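/*
 * For illustration, a long mode line produced by the format above might look
 * like this (hypothetical values):
 *
 *      00007fff12345000 3 | P W U A D - WT -- -- NX 4K -0- 0000000089abc000
 *
 * i.e. a present, writable, user, accessed and dirty 4K page mapped to the
 * host physical address in the last column.
 */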
3482
3483
3484/**
3485 * Dumps a PAE shadow page directory table.
3486 *
3487 * @returns VBox status code (VINF_SUCCESS).
3488 * @param pVM The VM handle.
3489 * @param HCPhys The physical address of the page directory.
3490 * @param u64Address The virtual address at which the page directory starts.
3491 * @param cr4 The CR4 value; only the PSE bit is currently used.
3492 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3493 * @param cMaxDepth The maximum depth.
3494 * @param pHlp Pointer to the output functions.
3495 */
3496static int pgmR3DumpHierarchyHCPaePD(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3497{
3498 PX86PDPAE pPD = (PX86PDPAE)MMPagePhys2Page(pVM, HCPhys);
3499 if (!pPD)
3500 {
3501 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory at HCPhys=%#VHp was not found in the page pool!\n",
3502 fLongMode ? 16 : 8, u64Address, HCPhys);
3503 return VERR_INVALID_PARAMETER;
3504 }
3505 const bool fBigPagesSupported = fLongMode || !!(cr4 & X86_CR4_PSE);
3506
3507 int rc = VINF_SUCCESS;
3508 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
3509 {
3510 X86PDEPAE Pde = pPD->a[i];
3511 if (Pde.n.u1Present)
3512 {
3513 if (fBigPagesSupported && Pde.b.u1Size)
3514 pHlp->pfnPrintf(pHlp,
3515 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3516 ? "%016llx 2 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n"
3517 : "%08llx 1 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n",
3518 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3519 Pde.b.u1Write ? 'W' : 'R',
3520 Pde.b.u1User ? 'U' : 'S',
3521 Pde.b.u1Accessed ? 'A' : '-',
3522 Pde.b.u1Dirty ? 'D' : '-',
3523 Pde.b.u1Global ? 'G' : '-',
3524 Pde.b.u1WriteThru ? "WT" : "--",
3525 Pde.b.u1CacheDisable? "CD" : "--",
3526 Pde.b.u1PAT ? "AT" : "--",
3527 Pde.b.u1NoExecute ? "NX" : "--",
3528 Pde.u & RT_BIT_64(9) ? '1' : '0',
3529 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3530 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3531 Pde.u & X86_PDE_PAE_PG_MASK);
3532 else
3533 {
3534 pHlp->pfnPrintf(pHlp,
3535 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3536 ? "%016llx 2 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n"
3537 : "%08llx 1 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n",
3538 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3539 Pde.n.u1Write ? 'W' : 'R',
3540 Pde.n.u1User ? 'U' : 'S',
3541 Pde.n.u1Accessed ? 'A' : '-',
3542 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3543 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3544 Pde.n.u1WriteThru ? "WT" : "--",
3545 Pde.n.u1CacheDisable? "CD" : "--",
3546 Pde.n.u1NoExecute ? "NX" : "--",
3547 Pde.u & RT_BIT_64(9) ? '1' : '0',
3548 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3549 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3550 Pde.u & X86_PDE_PAE_PG_MASK);
3551 if (cMaxDepth >= 1)
3552 {
3553 /** @todo what about using the page pool for mapping PTs? */
3554 uint64_t u64AddressPT = u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT);
3555 RTHCPHYS HCPhysPT = Pde.u & X86_PDE_PAE_PG_MASK;
3556 PX86PTPAE pPT = NULL;
3557 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3558 pPT = (PX86PTPAE)MMPagePhys2Page(pVM, HCPhysPT);
3559 else
3560 {
3561 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3562 {
3563 uint64_t off = u64AddressPT - pMap->GCPtr;
3564 if (off < pMap->cb)
3565 {
3566 const int iPDE = (uint32_t)(off >> X86_PD_SHIFT);
3567 const int iSub = (int)((off >> X86_PD_PAE_SHIFT) & 1); /* MSC is a pain sometimes */
3568 if ((iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0) != HCPhysPT)
3569                                pHlp->pfnPrintf(pHlp, "%0*llx error! Mapping error! PT %d has HCPhysPT=%VHp, not the %VHp in the PD.\n",
3570 fLongMode ? 16 : 8, u64AddressPT, iPDE,
3571 iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0, HCPhysPT);
3572 pPT = &pMap->aPTs[iPDE].paPaePTsR3[iSub];
3573 }
3574 }
3575 }
3576 int rc2 = VERR_INVALID_PARAMETER;
3577 if (pPT)
3578 rc2 = pgmR3DumpHierarchyHCPaePT(pVM, pPT, u64AddressPT, fLongMode, cMaxDepth - 1, pHlp);
3579 else
3580 pHlp->pfnPrintf(pHlp, "%0*llx error! Page table at HCPhys=%#VHp was not found in the page pool!\n",
3581 fLongMode ? 16 : 8, u64AddressPT, HCPhysPT);
3582 if (rc2 < rc && VBOX_SUCCESS(rc))
3583 rc = rc2;
3584 }
3585 }
3586 }
3587 }
3588 return rc;
3589}
3590
3591
3592/**
3593 * Dumps a PAE shadow page directory pointer table.
3594 *
3595 * @returns VBox status code (VINF_SUCCESS).
3596 * @param pVM The VM handle.
3597 * @param HCPhys The physical address of the page directory pointer table.
3598 * @param u64Address The virtual address at which the page directory pointer table starts.
3599 * @param cr4 The CR4 value; only the PSE bit is currently used.
3600 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3601 * @param cMaxDepth The maximum depth.
3602 * @param pHlp Pointer to the output functions.
3603 */
3604static int pgmR3DumpHierarchyHCPaePDPT(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3605{
3606 PX86PDPT pPDPT = (PX86PDPT)MMPagePhys2Page(pVM, HCPhys);
3607 if (!pPDPT)
3608 {
3609 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory pointer table at HCPhys=%#VHp was not found in the page pool!\n",
3610 fLongMode ? 16 : 8, u64Address, HCPhys);
3611 return VERR_INVALID_PARAMETER;
3612 }
3613
3614 int rc = VINF_SUCCESS;
3615 const unsigned c = fLongMode ? RT_ELEMENTS(pPDPT->a) : X86_PG_PAE_PDPE_ENTRIES;
3616 for (unsigned i = 0; i < c; i++)
3617 {
3618 X86PDPE Pdpe = pPDPT->a[i];
3619 if (Pdpe.n.u1Present)
3620 {
3621 if (fLongMode)
3622 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3623 "%016llx 1 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3624 u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3625 Pdpe.lm.u1Write ? 'W' : 'R',
3626 Pdpe.lm.u1User ? 'U' : 'S',
3627 Pdpe.lm.u1Accessed ? 'A' : '-',
3628 Pdpe.lm.u3Reserved & 1? '?' : '.', /* ignored */
3629 Pdpe.lm.u3Reserved & 4? '!' : '.', /* mbz */
3630 Pdpe.lm.u1WriteThru ? "WT" : "--",
3631 Pdpe.lm.u1CacheDisable? "CD" : "--",
3632 Pdpe.lm.u3Reserved & 2? "!" : "..",/* mbz */
3633 Pdpe.lm.u1NoExecute ? "NX" : "--",
3634 Pdpe.u & RT_BIT(9) ? '1' : '0',
3635 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3636 Pdpe.u & RT_BIT(11) ? '1' : '0',
3637 Pdpe.u & X86_PDPE_PG_MASK);
3638 else
3639 pHlp->pfnPrintf(pHlp, /*P G WT CD AT NX 4M a p ? */
3640 "%08x 0 | P %c %s %s %s %s .. %c%c%c %016llx\n",
3641 i << X86_PDPT_SHIFT,
3642 Pdpe.n.u4Reserved & 1? '!' : '.', /* mbz */
3643 Pdpe.n.u4Reserved & 4? '!' : '.', /* mbz */
3644 Pdpe.n.u1WriteThru ? "WT" : "--",
3645 Pdpe.n.u1CacheDisable? "CD" : "--",
3646 Pdpe.n.u4Reserved & 2? "!" : "..",/* mbz */
3647 Pdpe.u & RT_BIT(9) ? '1' : '0',
3648 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3649 Pdpe.u & RT_BIT(11) ? '1' : '0',
3650 Pdpe.u & X86_PDPE_PG_MASK);
3651 if (cMaxDepth >= 1)
3652 {
3653 int rc2 = pgmR3DumpHierarchyHCPaePD(pVM, Pdpe.u & X86_PDPE_PG_MASK, u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3654 cr4, fLongMode, cMaxDepth - 1, pHlp);
3655 if (rc2 < rc && VBOX_SUCCESS(rc))
3656 rc = rc2;
3657 }
3658 }
3659 }
3660 return rc;
3661}
3662
3663
3664/**
3665 * Dumps a long mode shadow page map level 4 table.
3666 *
3667 * @returns VBox status code (VINF_SUCCESS).
3668 * @param pVM The VM handle.
3669 * @param HCPhys The physical address of the table.
3670 * @param cr4 The CR4 value; only the PSE bit is currently used.
3671 * @param cMaxDepth The maximum depth.
3672 * @param pHlp Pointer to the output functions.
3673 */
3674static int pgmR3DumpHierarchyHcPaePML4(PVM pVM, RTHCPHYS HCPhys, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3675{
3676 PX86PML4 pPML4 = (PX86PML4)MMPagePhys2Page(pVM, HCPhys);
3677 if (!pPML4)
3678 {
3679 pHlp->pfnPrintf(pHlp, "Page map level 4 at HCPhys=%#VHp was not found in the page pool!\n", HCPhys);
3680 return VERR_INVALID_PARAMETER;
3681 }
3682
3683 int rc = VINF_SUCCESS;
3684 for (unsigned i = 0; i < RT_ELEMENTS(pPML4->a); i++)
3685 {
3686 X86PML4E Pml4e = pPML4->a[i];
3687 if (Pml4e.n.u1Present)
3688 {
3689 uint64_t u64Address = ((uint64_t)i << X86_PML4_SHIFT) | (((uint64_t)i >> (X86_PML4_SHIFT - X86_PDPT_SHIFT - 1)) * 0xffff000000000000ULL);
3690 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3691 "%016llx 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3692 u64Address,
3693 Pml4e.n.u1Write ? 'W' : 'R',
3694 Pml4e.n.u1User ? 'U' : 'S',
3695 Pml4e.n.u1Accessed ? 'A' : '-',
3696 Pml4e.n.u3Reserved & 1? '?' : '.', /* ignored */
3697 Pml4e.n.u3Reserved & 4? '!' : '.', /* mbz */
3698 Pml4e.n.u1WriteThru ? "WT" : "--",
3699 Pml4e.n.u1CacheDisable? "CD" : "--",
3700 Pml4e.n.u3Reserved & 2? "!" : "..",/* mbz */
3701 Pml4e.n.u1NoExecute ? "NX" : "--",
3702 Pml4e.u & RT_BIT(9) ? '1' : '0',
3703 Pml4e.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3704 Pml4e.u & RT_BIT(11) ? '1' : '0',
3705 Pml4e.u & X86_PML4E_PG_MASK);
3706
3707 if (cMaxDepth >= 1)
3708 {
3709 int rc2 = pgmR3DumpHierarchyHCPaePDPT(pVM, Pml4e.u & X86_PML4E_PG_MASK, u64Address, cr4, true, cMaxDepth - 1, pHlp);
3710 if (rc2 < rc && VBOX_SUCCESS(rc))
3711 rc = rc2;
3712 }
3713 }
3714 }
3715 return rc;
3716}
3717
3718
3719/**
3720 * Dumps a 32-bit shadow page table.
3721 *
3722 * @returns VBox status code (VINF_SUCCESS).
3723 * @param pVM The VM handle.
3724 * @param pPT Pointer to the page table.
3725 * @param u32Address The virtual address this table starts at.
3726 * @param pHlp Pointer to the output functions.
3727 */
3728int pgmR3DumpHierarchyHC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, PCDBGFINFOHLP pHlp)
3729{
3730 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3731 {
3732 X86PTE Pte = pPT->a[i];
3733 if (Pte.n.u1Present)
3734 {
3735 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3736 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
3737 u32Address + (i << X86_PT_SHIFT),
3738 Pte.n.u1Write ? 'W' : 'R',
3739 Pte.n.u1User ? 'U' : 'S',
3740 Pte.n.u1Accessed ? 'A' : '-',
3741 Pte.n.u1Dirty ? 'D' : '-',
3742 Pte.n.u1Global ? 'G' : '-',
3743 Pte.n.u1WriteThru ? "WT" : "--",
3744 Pte.n.u1CacheDisable? "CD" : "--",
3745 Pte.n.u1PAT ? "AT" : "--",
3746 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3747 Pte.u & RT_BIT(10) ? '1' : '0',
3748 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
3749 Pte.u & X86_PDE_PG_MASK);
3750 }
3751 }
3752 return VINF_SUCCESS;
3753}
3754
3755
3756/**
3757 * Dumps a 32-bit shadow page directory and page tables.
3758 *
3759 * @returns VBox status code (VINF_SUCCESS).
3760 * @param pVM The VM handle.
3761 * @param cr3 The root of the hierarchy.
3762 * @param cr4 The CR4 value; only the PSE bit is currently used.
3763 * @param cMaxDepth How deep into the hierarchy the dumper should go.
3764 * @param pHlp Pointer to the output functions.
3765 */
3766int pgmR3DumpHierarchyHC32BitPD(PVM pVM, uint32_t cr3, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3767{
3768 PX86PD pPD = (PX86PD)MMPagePhys2Page(pVM, cr3 & X86_CR3_PAGE_MASK);
3769 if (!pPD)
3770 {
3771 pHlp->pfnPrintf(pHlp, "Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK);
3772 return VERR_INVALID_PARAMETER;
3773 }
3774
3775 int rc = VINF_SUCCESS;
3776 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
3777 {
3778 X86PDE Pde = pPD->a[i];
3779 if (Pde.n.u1Present)
3780 {
3781 const uint32_t u32Address = i << X86_PD_SHIFT;
3782 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
3783 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3784 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
3785 u32Address,
3786 Pde.b.u1Write ? 'W' : 'R',
3787 Pde.b.u1User ? 'U' : 'S',
3788 Pde.b.u1Accessed ? 'A' : '-',
3789 Pde.b.u1Dirty ? 'D' : '-',
3790 Pde.b.u1Global ? 'G' : '-',
3791 Pde.b.u1WriteThru ? "WT" : "--",
3792 Pde.b.u1CacheDisable? "CD" : "--",
3793 Pde.b.u1PAT ? "AT" : "--",
3794 Pde.u & RT_BIT_64(9) ? '1' : '0',
3795 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3796 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3797 Pde.u & X86_PDE4M_PG_MASK);
3798 else
3799 {
3800 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3801 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
3802 u32Address,
3803 Pde.n.u1Write ? 'W' : 'R',
3804 Pde.n.u1User ? 'U' : 'S',
3805 Pde.n.u1Accessed ? 'A' : '-',
3806 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3807 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3808 Pde.n.u1WriteThru ? "WT" : "--",
3809 Pde.n.u1CacheDisable? "CD" : "--",
3810 Pde.u & RT_BIT_64(9) ? '1' : '0',
3811 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3812 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3813 Pde.u & X86_PDE_PG_MASK);
3814 if (cMaxDepth >= 1)
3815 {
3816 /** @todo what about using the page pool for mapping PTs? */
3817 RTHCPHYS HCPhys = Pde.u & X86_PDE_PG_MASK;
3818 PX86PT pPT = NULL;
3819 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3820 pPT = (PX86PT)MMPagePhys2Page(pVM, HCPhys);
3821 else
3822 {
3823 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3824 if (u32Address - pMap->GCPtr < pMap->cb)
3825 {
3826 int iPDE = (u32Address - pMap->GCPtr) >> X86_PD_SHIFT;
3827 if (pMap->aPTs[iPDE].HCPhysPT != HCPhys)
3828                                pHlp->pfnPrintf(pHlp, "%08x error! Mapping error! PT %d has HCPhysPT=%VHp, not the %VHp in the PD.\n",
3829 u32Address, iPDE, pMap->aPTs[iPDE].HCPhysPT, HCPhys);
3830 pPT = pMap->aPTs[iPDE].pPTR3;
3831 }
3832 }
3833 int rc2 = VERR_INVALID_PARAMETER;
3834 if (pPT)
3835 rc2 = pgmR3DumpHierarchyHC32BitPT(pVM, pPT, u32Address, pHlp);
3836 else
3837 pHlp->pfnPrintf(pHlp, "%08x error! Page table at %#x was not found in the page pool!\n", u32Address, HCPhys);
3838 if (rc2 < rc && VBOX_SUCCESS(rc))
3839 rc = rc2;
3840 }
3841 }
3842 }
3843 }
3844
3845 return rc;
3846}
3847
3848
3849/**
3850 * Dumps a 32-bit guest page table.
3851 *
3852 * @returns VBox status code (VINF_SUCCESS).
3853 * @param pVM The VM handle.
3854 * @param pPT Pointer to the page table.
3855 * @param u32Address The virtual address this table starts at.
3856 * @param PhysSearch Address to search for.
3857 */
int pgmR3DumpHierarchyGC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, RTGCPHYS PhysSearch)
{
    for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
    {
        X86PTE Pte = pPT->a[i];
        if (Pte.n.u1Present)
        {
            Log(( /*P R S A D G WT CD AT NX 4M a m d */
                 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
                 u32Address + (i << X86_PT_SHIFT),
                 Pte.n.u1Write        ? 'W'  : 'R',
                 Pte.n.u1User         ? 'U'  : 'S',
                 Pte.n.u1Accessed     ? 'A'  : '-',
                 Pte.n.u1Dirty        ? 'D'  : '-',
                 Pte.n.u1Global       ? 'G'  : '-',
                 Pte.n.u1WriteThru    ? "WT" : "--",
                 Pte.n.u1CacheDisable ? "CD" : "--",
                 Pte.n.u1PAT          ? "AT" : "--",
                 Pte.u & PGM_PTFLAGS_TRACK_DIRTY    ? 'd' : '-',
                 Pte.u & RT_BIT(10)                 ? '1' : '0',
                 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
                 Pte.u & X86_PDE_PG_MASK));

            if ((Pte.u & X86_PDE_PG_MASK) == PhysSearch)
            {
                uint64_t fPageShw = 0;
                RTHCPHYS pPhysHC = 0;

                PGMShwGetPage(pVM, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), &fPageShw, &pPhysHC);
                Log(("Found %VGp at %VGv -> flags=%llx\n", PhysSearch, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), fPageShw));
            }
        }
    }
    return VINF_SUCCESS;
}
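
/*
 * Illustrative sketch (comment only): when a PTE matches PhysSearch, the
 * function above cross-checks the shadow paging state of the same virtual
 * address.  A minimal form of that query, where GCPtr is a placeholder for
 * the virtual address being checked:
 *
 *     uint64_t fPageShw = 0;
 *     RTHCPHYS HCPhys = 0;
 *     int rc = PGMShwGetPage(pVM, GCPtr, &fPageShw, &HCPhys);
 *     if (VBOX_SUCCESS(rc))
 *         Log(("shadow flags=%llx phys=%VHp\n", fPageShw, HCPhys));
 */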


/**
 * Dumps a 32-bit guest page directory and page tables.
 *
 * @returns VBox status code (VINF_SUCCESS).
 * @param   pVM         The VM handle.
 * @param   cr3         The root of the hierarchy.
 * @param   cr4         The CR4 register value; only the PSE bit is currently used.
 * @param   PhysSearch  Address to search for.
 */
PGMR3DECL(int) PGMR3DumpHierarchyGC(PVM pVM, uint64_t cr3, uint64_t cr4, RTGCPHYS PhysSearch)
{
    bool fLongMode = false;
    const unsigned cch = fLongMode ? 16 : 8; NOREF(cch);
    PX86PD pPD = 0;

    int rc = PGM_GCPHYS_2_PTR(pVM, cr3 & X86_CR3_PAGE_MASK, &pPD);
    if (VBOX_FAILURE(rc) || !pPD)
    {
        Log(("Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK));
        return VERR_INVALID_PARAMETER;
    }

    Log(("cr3=%08x cr4=%08x%s\n"
         "%-*s P - Present\n"
         "%-*s | R/W - Read (0) / Write (1)\n"
         "%-*s | | U/S - User (1) / Supervisor (0)\n"
         "%-*s | | | A - Accessed\n"
         "%-*s | | | | D - Dirty\n"
         "%-*s | | | | | G - Global\n"
         "%-*s | | | | | | WT - Write thru\n"
         "%-*s | | | | | | | CD - Cache disable\n"
         "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
         "%-*s | | | | | | | | | NX - No execute (K8)\n"
         "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
         "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
         "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
         "%-*s Level | | | | | | | | | | | | Page\n"
       /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
                      - W U - - - -- -- -- -- -- 010 */
         , cr3, cr4, fLongMode ? " Long Mode" : "",
         cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
         cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address"));

    for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
    {
        X86PDE Pde = pPD->a[i];
        if (Pde.n.u1Present)
        {
            const uint32_t u32Address = i << X86_PD_SHIFT;

            if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
                Log(( /*P R S A D G WT CD AT NX 4M a m d */
                     "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
                     u32Address,
                     Pde.b.u1Write        ? 'W'  : 'R',
                     Pde.b.u1User         ? 'U'  : 'S',
                     Pde.b.u1Accessed     ? 'A'  : '-',
                     Pde.b.u1Dirty        ? 'D'  : '-',
                     Pde.b.u1Global       ? 'G'  : '-',
                     Pde.b.u1WriteThru    ? "WT" : "--",
                     Pde.b.u1CacheDisable ? "CD" : "--",
                     Pde.b.u1PAT          ? "AT" : "--",
                     Pde.u & RT_BIT(9)  ? '1' : '0',
                     Pde.u & RT_BIT(10) ? '1' : '0',
                     Pde.u & RT_BIT(11) ? '1' : '0',
                     pgmGstGet4MBPhysPage(&pVM->pgm.s, Pde)));
                /** @todo PhysSearch */
            else
            {
                Log(( /*P R S A D G WT CD AT NX 4M a m d */
                     "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
                     u32Address,
                     Pde.n.u1Write        ? 'W'  : 'R',
                     Pde.n.u1User         ? 'U'  : 'S',
                     Pde.n.u1Accessed     ? 'A'  : '-',
                     Pde.n.u1Reserved0    ? '?'  : '.', /* ignored */
                     Pde.n.u1Reserved1    ? '?'  : '.', /* ignored */
                     Pde.n.u1WriteThru    ? "WT" : "--",
                     Pde.n.u1CacheDisable ? "CD" : "--",
                     Pde.u & RT_BIT(9)  ? '1' : '0',
                     Pde.u & RT_BIT(10) ? '1' : '0',
                     Pde.u & RT_BIT(11) ? '1' : '0',
                     Pde.u & X86_PDE_PG_MASK));
                ////if (cMaxDepth >= 1)
                {
                    /** @todo what about using the page pool for mapping PTs? */
                    RTGCPHYS GCPhys = Pde.u & X86_PDE_PG_MASK;
                    PX86PT pPT = NULL;

                    rc = PGM_GCPHYS_2_PTR(pVM, GCPhys, &pPT);

                    int rc2 = VERR_INVALID_PARAMETER;
                    if (pPT)
                        rc2 = pgmR3DumpHierarchyGC32BitPT(pVM, pPT, u32Address, PhysSearch);
                    else
                        Log(("%08x error! Page table at %#x was not found in the page pool!\n", u32Address, GCPhys));
                    if (rc2 < rc && VBOX_SUCCESS(rc))
                        rc = rc2;
                }
            }
        }
    }

    return rc;
}
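
/*
 * Usage sketch (comment only, assuming a valid pVM): searching the current
 * guest hierarchy for a guest physical page and logging any hits could look
 * like this; output goes to the debug log via Log:
 *
 *     RTGCPHYS GCPhysPage = ...;   // the page to search for
 *     int rc = PGMR3DumpHierarchyGC(pVM, CPUMGetGuestCR3(pVM), CPUMGetGuestCR4(pVM), GCPhysPage);
 */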


/**
 * Dumps a page table hierarchy, using only physical addresses and cr4/lm flags.
 *
 * @returns VBox status code (VINF_SUCCESS).
 * @param   pVM         The VM handle.
 * @param   cr3         The root of the hierarchy.
 * @param   cr4         The CR4 register value; only the PAE and PSE bits are currently used.
 * @param   fLongMode   Set if long mode, false if not long mode.
 * @param   cMaxDepth   Number of levels to dump.
 * @param   pHlp        Pointer to the output functions.
 */
PGMR3DECL(int) PGMR3DumpHierarchyHC(PVM pVM, uint64_t cr3, uint64_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
{
    if (!pHlp)
        pHlp = DBGFR3InfoLogHlp();
    if (!cMaxDepth)
        return VINF_SUCCESS;
    const unsigned cch = fLongMode ? 16 : 8;
    pHlp->pfnPrintf(pHlp,
                    "cr3=%08x cr4=%08x%s\n"
                    "%-*s P - Present\n"
                    "%-*s | R/W - Read (0) / Write (1)\n"
                    "%-*s | | U/S - User (1) / Supervisor (0)\n"
                    "%-*s | | | A - Accessed\n"
                    "%-*s | | | | D - Dirty\n"
                    "%-*s | | | | | G - Global\n"
                    "%-*s | | | | | | WT - Write thru\n"
                    "%-*s | | | | | | | CD - Cache disable\n"
                    "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
                    "%-*s | | | | | | | | | NX - No execute (K8)\n"
                    "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
                    "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
                    "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
                    "%-*s Level | | | | | | | | | | | | Page\n"
                  /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
                                 - W U - - - -- -- -- -- -- 010 */
                    , cr3, cr4, fLongMode ? " Long Mode" : "",
                    cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
                    cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address");
    if (cr4 & X86_CR4_PAE)
    {
        if (fLongMode)
            return pgmR3DumpHierarchyHcPaePML4(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
        return pgmR3DumpHierarchyHCPaePDPT(pVM, cr3 & X86_CR3_PAE_PAGE_MASK, 0, cr4, false, cMaxDepth, pHlp);
    }
    return pgmR3DumpHierarchyHC32BitPD(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
}
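
/*
 * Usage sketch (comment only): a caller that already knows the shadow CR3
 * value (uShadowCr3 below is a placeholder, not an API) could dump the whole
 * shadow hierarchy; passing NULL for pHlp selects the log output helper:
 *
 *     int rc = PGMR3DumpHierarchyHC(pVM, uShadowCr3, CPUMGetGuestCR4(pVM),
 *                                   false, 99, NULL);   // fLongMode=false, cMaxDepth=99
 */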


#ifdef VBOX_WITH_DEBUGGER
/**
 * The '.pgmram' command.
 *
 * @returns VBox status.
 * @param   pCmd        Pointer to the command descriptor (as registered).
 * @param   pCmdHlp     Pointer to command helper functions.
 * @param   pVM         Pointer to the current VM (if any).
 * @param   paArgs      Pointer to (readonly) array of arguments.
 * @param   cArgs       Number of arguments in the array.
 */
static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
{
    /*
     * Validate input.
     */
    if (!pVM)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
    if (!pVM->pgm.s.pRamRangesGC)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no RAM is registered.\n");

    /*
     * Dump the ranges.
     */
    int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "From - To (incl) pvHC\n");
    PPGMRAMRANGE pRam;
    for (pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
    {
        rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
                                "%VGp - %VGp %p\n",
                                pRam->GCPhys, pRam->GCPhysLast, pRam->pvHC);
        if (VBOX_FAILURE(rc))
            return rc;
    }

    return VINF_SUCCESS;
}
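
/*
 * Sketch (comment only): the RAM range list the command walks is an ordinary
 * ring-3 linked list; each PGMRAMRANGE covers GCPhys..GCPhysLast inclusive:
 *
 *     for (PPGMRAMRANGE pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
 *         LogRel(("%VGp - %VGp %p\n", pRam->GCPhys, pRam->GCPhysLast, pRam->pvHC));
 */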


/**
 * The '.pgmmap' command.
 *
 * @returns VBox status.
 * @param   pCmd        Pointer to the command descriptor (as registered).
 * @param   pCmdHlp     Pointer to command helper functions.
 * @param   pVM         Pointer to the current VM (if any).
 * @param   paArgs      Pointer to (readonly) array of arguments.
 * @param   cArgs       Number of arguments in the array.
 */
static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
{
    /*
     * Validate input.
     */
    if (!pVM)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
    if (!pVM->pgm.s.pMappingsR3)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no mappings are registered.\n");

    /*
     * Print whether the mappings are fixed or floating.
     */
    int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, pVM->pgm.s.fMappingsFixed ? "The mappings are FIXED.\n" : "The mappings are FLOATING.\n");
    if (VBOX_FAILURE(rc))
        return rc;

    /*
     * Dump the ranges.
     */
    PPGMMAPPING pCur;
    for (pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
    {
        rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
                                "%08x - %08x %s\n",
                                pCur->GCPtr, pCur->GCPtrLast, pCur->pszDesc);
        if (VBOX_FAILURE(rc))
            return rc;
    }

    return VINF_SUCCESS;
}


/**
 * The '.pgmsync' command.
 *
 * @returns VBox status.
 * @param   pCmd        Pointer to the command descriptor (as registered).
 * @param   pCmdHlp     Pointer to command helper functions.
 * @param   pVM         Pointer to the current VM (if any).
 * @param   paArgs      Pointer to (readonly) array of arguments.
 * @param   cArgs       Number of arguments in the array.
 */
static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
{
    /*
     * Validate input.
     */
    if (!pVM)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");

    /*
     * Force page directory sync.
     */
    VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);

    int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Forcing page directory sync.\n");
    if (VBOX_FAILURE(rc))
        return rc;

    return VINF_SUCCESS;
}
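
/*
 * Sketch (comment only): forcing a full shadow page table resync from code
 * uses the same one-liner as the command; the resync itself is performed the
 * next time the VM returns to guest execution:
 *
 *     VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
 */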


#ifdef VBOX_STRICT
/**
 * The '.pgmassertcr3' command.
 *
 * @returns VBox status.
 * @param   pCmd        Pointer to the command descriptor (as registered).
 * @param   pCmdHlp     Pointer to command helper functions.
 * @param   pVM         Pointer to the current VM (if any).
 * @param   paArgs      Pointer to (readonly) array of arguments.
 * @param   cArgs       Number of arguments in the array.
 */
static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
{
    /*
     * Validate input.
     */
    if (!pVM)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");

    int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Checking shadow CR3 page tables for consistency.\n");
    if (VBOX_FAILURE(rc))
        return rc;

    PGMAssertCR3(pVM, CPUMGetGuestCR3(pVM), CPUMGetGuestCR4(pVM));

    return VINF_SUCCESS;
}
#endif /* VBOX_STRICT */

/**
 * The '.pgmsyncalways' command.
 *
 * @returns VBox status.
 * @param   pCmd        Pointer to the command descriptor (as registered).
 * @param   pCmdHlp     Pointer to command helper functions.
 * @param   pVM         Pointer to the current VM (if any).
 * @param   paArgs      Pointer to (readonly) array of arguments.
 * @param   cArgs       Number of arguments in the array.
 */
static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
{
    /*
     * Validate input.
     */
    if (!pVM)
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");

    /*
     * Toggle permanent forced page directory syncing.
     */
    if (pVM->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
    {
        ASMAtomicAndU32(&pVM->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Disabled permanent forced page directory syncing.\n");
    }
    else
    {
        ASMAtomicOrU32(&pVM->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
        VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
        return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Enabled permanent forced page directory syncing.\n");
    }
}
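
/*
 * Sketch (comment only): the PGM_SYNC_ALWAYS toggle above relies on atomic
 * or/and so concurrent updates of fSyncFlags cannot lose bits:
 *
 *     ASMAtomicOrU32(&pVM->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);     // enable
 *     ASMAtomicAndU32(&pVM->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);   // disable
 */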

#endif /* VBOX_WITH_DEBUGGER */

/**
 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
 */
typedef struct PGMCHECKINTARGS
{
    bool                    fLeftToRight;   /**< true: left-to-right; false: right-to-left. */
    PPGMPHYSHANDLER         pPrevPhys;
    PPGMVIRTHANDLER         pPrevVirt;
    PPGMPHYS2VIRTHANDLER    pPrevPhys2Virt;
    PVM                     pVM;
} PGMCHECKINTARGS, *PPGMCHECKINTARGS;

/**
 * Validate a node in the physical handler tree.
 *
 * @returns 0 if ok, otherwise 1.
 * @param   pNode       The handler node.
 * @param   pvUser      Pointer to a PGMCHECKINTARGS structure.
 */
static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
{
    PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
    PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
    AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
    AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast, ("pCur=%p %VGp-%VGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
    AssertReleaseMsg(   !pArgs->pPrevPhys
                     || (pArgs->fLeftToRight ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
                     ("pPrevPhys=%p %VGp-%VGp %s\n"
                      "     pCur=%p %VGp-%VGp %s\n",
                      pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
                      pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
    pArgs->pPrevPhys = pCur;
    return 0;
}


/**
 * Validate a node in the virtual handler tree.
 *
 * @returns 0 if ok, otherwise 1.
 * @param   pNode       The handler node.
 * @param   pvUser      Pointer to a PGMCHECKINTARGS structure.
 */
static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
{
    PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
    PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
    AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
    AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast, ("pCur=%p %VGv-%VGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
    AssertReleaseMsg(   !pArgs->pPrevVirt
                     || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
                     ("pPrevVirt=%p %VGv-%VGv %s\n"
                      "     pCur=%p %VGv-%VGv %s\n",
                      pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
                      pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
    for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
    {
        AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
                         ("pCur=%p %VGv-%VGv %s\n"
                          "iPage=%d offVirtHandle=%#x expected %#x\n",
                          pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
                          iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
    }
    pArgs->pPrevVirt = pCur;
    return 0;
}


/**
 * Validate a node in the physical-to-virtual handler tree.
 *
 * @returns 0 if ok, otherwise 1.
 * @param   pNode       The handler node.
 * @param   pvUser      Pointer to a PGMCHECKINTARGS structure.
 */
static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
{
    PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
    PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
    AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
    AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
    AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast, ("pCur=%p %VGp-%VGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
    AssertReleaseMsg(   !pArgs->pPrevPhys2Virt
                     || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
                     ("pPrevPhys2Virt=%p %VGp-%VGp\n"
                      "          pCur=%p %VGp-%VGp\n",
                      pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
                      pCur, pCur->Core.Key, pCur->Core.KeyLast));
    AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
                     ("pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
                      pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
    if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
    {
        PPGMPHYS2VIRTHANDLER pCur2 = pCur;
        for (;;)
        {
            pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur2 + (pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)); /* advance from pCur2, not pCur, or the walk never progresses. */
            AssertReleaseMsg(pCur2 != pCur,
                             (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
                              pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
            AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
                             (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
                              "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
                              pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
                              pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
            AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
                             (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
                              "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
                              pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
                              pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
            AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
                             (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
                              "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
                              pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
                              pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
            if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
                break;
        }
    }

    pArgs->pPrevPhys2Virt = pCur;
    return 0;
}
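
/*
 * Sketch (comment only): offNextAlias packs two flag bits and a self-relative
 * offset, so following the alias chain is pointer arithmetic on the current
 * node; a zero offset (after masking) terminates the chain:
 *
 *     PPGMPHYS2VIRTHANDLER pNext = (PPGMPHYS2VIRTHANDLER)
 *         ((intptr_t)pCur2 + (pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
 */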


/**
 * Perform an integrity check on the PGM component.
 *
 * @returns VINF_SUCCESS if everything is fine.
 * @returns VBox error status after asserting on integrity breach.
 * @param   pVM         The VM handle.
 */
PDMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
{
    AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);

    /*
     * Check the trees.
     */
    int cErrors = 0;
    const PGMCHECKINTARGS LeftToRight = { true,  NULL, NULL, NULL, pVM }; /* not static: must pick up the pVM of this call. */
    const PGMCHECKINTARGS RightToLeft = { false, NULL, NULL, NULL, pVM };
    PGMCHECKINTARGS Args = LeftToRight;
    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers,       true,  pgmR3CheckIntegrityPhysHandlerNode, &Args);
    Args = RightToLeft;
    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers,       false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
    Args = LeftToRight;
    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->VirtHandlers,       true,  pgmR3CheckIntegrityVirtHandlerNode, &Args);
    Args = RightToLeft;
    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->VirtHandlers,       false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
    Args = LeftToRight;
    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->HyperVirtHandlers,  true,  pgmR3CheckIntegrityVirtHandlerNode, &Args);
    Args = RightToLeft;
    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->HyperVirtHandlers,  false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
    Args = LeftToRight;
    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysToVirtHandlers, true,  pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
    Args = RightToLeft;
    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);

    return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
}
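
/*
 * Usage sketch (comment only): the check asserts on any breach, so callers
 * typically just invoke it after a suspect operation and assert the status:
 *
 *     int rc = PGMR3CheckIntegrity(pVM);
 *     AssertRC(rc);
 */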


/**
 * Informs PGM whether all guest mappings must also be present in the shadow
 * page tables (necessary for e.g. VMX).
 *
 * @returns VBox status code.
 * @param   pVM         VM handle.
 * @param   fEnable     Enable or disable shadow mappings.
 */
PGMR3DECL(int) PGMR3ChangeShwPDMappings(PVM pVM, bool fEnable)
{
    pVM->pgm.s.fDisableMappings = !fEnable;

    uint32_t cb;
    int rc = PGMR3MappingsSize(pVM, &cb);
    AssertRCReturn(rc, rc);

    /* Pretend the mappings are fixed to force a refresh of the reserved PDEs. */
    rc = PGMR3MappingsFix(pVM, MM_HYPER_AREA_ADDRESS, cb);
    AssertRCReturn(rc, rc);

    return VINF_SUCCESS;
}
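
/*
 * Usage sketch (comment only): an execution mode that needs the guest
 * mappings present in the shadow page tables could enable them like this:
 *
 *     int rc = PGMR3ChangeShwPDMappings(pVM, true);   // fEnable = true
 *     AssertRCReturn(rc, rc);
 */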