VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/PGM.cpp@ 61128

Last change on this file since 61128 was 60401, checked in by vboxsync, 9 years ago

PGM: Converted NO_RAM_RESET into a CFGM option (PGM/ZeroRamPagesOnReset).

/* $Id: PGM.cpp 60401 2016-04-09 23:10:40Z vboxsync $ */
/** @file
 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
 */

/*
 * Copyright (C) 2006-2015 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */


/** @page pg_pgm PGM - The Page Manager and Monitor
 *
 * @sa @ref grp_pgm
 * @subpage pg_pgm_pool
 * @subpage pg_pgm_phys
 *
 *
 * @section sec_pgm_modes Paging Modes
 *
 * There are three memory contexts: Host Context (HC), Guest Context (GC)
 * and the intermediate context. When talking about paging, HC can also be
 * referred to as "host paging" and GC as "shadow paging".
 *
 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging
 * mode is defined by the host operating system. The shadow paging mode depends
 * on the host paging mode and the mode the guest is currently in. The
 * following relation between the two is defined:
 *
 * @verbatim
   Host ->  32-bit | PAE    | AMD64  |
   Guest    |       |        |        |
   ==v================================
   32-bit   32-bit   PAE      PAE
   -------|--------|--------|--------|
   PAE      PAE      PAE      PAE
   -------|--------|--------|--------|
   AMD64    AMD64    AMD64    AMD64
   -------|--------|--------|--------| @endverbatim
 *
 * All configurations except those on the diagonal (upper left) are expected to
 * require special effort from the switcher (i.e. be a bit slower).
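 *
 * A minimal sketch of the table's rule (the helper is hypothetical; the real
 * decision, which also weighs the host paging mode and available switchers,
 * lives in pgmR3CalcShadowMode):
 * @code
 *  PGMMODE pgmSketchShadowModeFor(PGMMODE enmGuestMode, PGMMODE enmHostMode)
 *  {
 *      switch (enmGuestMode)
 *      {
 *          case PGMMODE_32_BIT: // 32-bit guest: 32-bit shadow on a 32-bit host, else PAE.
 *              return enmHostMode == PGMMODE_32_BIT ? PGMMODE_32_BIT : PGMMODE_PAE;
 *          case PGMMODE_PAE:    return PGMMODE_PAE;   // PAE guest: always PAE shadow.
 *          case PGMMODE_AMD64:  return PGMMODE_AMD64; // AMD64 guest: always AMD64 shadow.
 *          default:             return PGMMODE_INVALID;
 *      }
 *  }
 * @endcode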
 *
 *
 *
 *
 * @section sec_pgm_shw The Shadow Memory Context
 *
 *
 * [..]
 *
 * Because guest context mappings require PDPT and PML4 entries to allow
 * writing on AMD64, the two upper levels will have fixed flags whatever the
 * guest is thinking of using there. So, when shadowing the PD level we will
 * calculate the effective flags of the PD and all the higher levels. In
 * legacy PAE mode this only applies to the PWT and PCD bits (the rest are
 * ignored/reserved/MBZ). We will ignore those bits for the present.
 *
 *
 *
 * @section sec_pgm_int The Intermediate Memory Context
 *
 * The world switch goes thru an intermediate memory context whose purpose is
 * to provide different mappings of the switcher code. All guest mappings are
 * also present in this context.
 *
 * The switcher code is mapped at the same location as on the host, at an
 * identity mapped location (physical equals virtual address), and at the
 * hypervisor location. The identity mapped location is for world switches
 * that involve disabling paging.
 *
 * PGM maintains page tables for 32-bit, PAE and AMD64 paging modes. This
 * simplifies switching guest CPU mode and consistency at the cost of more
 * code to do the work. All memory used for those page tables is located below
 * 4GB (this includes page tables for guest context mappings).
 *
 * Note! The intermediate memory context is also used for 64-bit guest
 *       execution on 32-bit hosts. Because we need to load 64-bit registers
 *       prior to switching to guest context, we need to be in 64-bit mode
 *       first. So, HM has some 64-bit worker routines in VMMRC.rc that get
 *       invoked via the special world switcher code in LegacyToAMD64.asm.
 *
 * @subsection subsec_pgm_int_gc Guest Context Mappings
 *
 * During assignment and relocation of a guest context mapping the intermediate
 * memory context is used to verify the new location.
 *
 * Guest context mappings are currently restricted to below 4GB, for reasons
 * of simplicity. This may change when we implement AMD64 support.
 *
 *
 *
 *
 * @section sec_pgm_misc Misc
 *
 *
 * @subsection sec_pgm_misc_A20 The A20 Gate
 *
 * PGM implements the A20 gate masking when translating a virtual guest address
 * into a physical address for CPU access, i.e. PGMGstGetPage (and friends) and
 * the code reading the guest page table entries during shadowing. The masking
 * is done consistently for all CPU modes, paged ones included. Large pages are
 * also masked correctly. (On current CPUs, experiments indicate that AMD does
 * not apply A20M in paged modes and Intel only does it for the 2nd MB of
 * memory.)
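 *
 * A minimal sketch of that masking step (the helper name is hypothetical; the
 * actual mask is kept in PGMCPU::GCPhysA20Mask and set up in PGMR3Init below):
 * @code
 *  RTGCPHYS pgmSketchApplyA20(PPGMCPU pPGM, RTGCPHYS GCPhys)
 *  {
 *      return GCPhys & pPGM->GCPhysA20Mask; // clears bit 20 when A20 is off
 *  }
 * @endcode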
 *
 * The A20 gate implementation is per CPU core. It can be configured on a per
 * core basis via the keyboard device and the PC architecture device. This is
 * probably not exactly how real CPUs do it, but SMP and A20 isn't a place where
 * guest OSes try pushing things anyway, so who cares. (On current real systems
 * the A20M signal is probably only sent to the boot CPU and it affects all
 * threads and probably all cores in that package.)
 *
 * The keyboard device and the PC architecture device don't OR their A20
 * config bits together, rather they are currently implemented such that they
 * mirror the CPU state. So, flipping the bit in either of them will change the
 * A20 state. (On real hardware the bits of the two devices should probably be
 * ORed together to indicate enabled, i.e. both need to be cleared to disable
 * A20 masking.)
 *
 * The A20 state will change immediately, Transmeta fashion. There are no
 * delays due to buses, wiring or other physical stuff. (On real hardware there
 * are normally delays; the delays differ between the two devices and probably
 * also between chipsets and CPU generations. Note that it's said that Transmeta
 * CPUs do the change immediately like us; they apparently intercept/handle the
 * port accesses in microcode. Neat.)
 *
 * @sa http://en.wikipedia.org/wiki/A20_line#The_80286_and_the_high_memory_area
 *
 *
 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
 *
 * The differences between legacy PAE and long mode PAE are:
 *   -# PDPE bits 1, 2, 5 and 6 are defined differently. In legacy mode they
 *      are all marked down as must-be-zero, while in long mode 1, 2 and 5 have
 *      the usual meanings while 6 is ignored (AMD). This means that upon
 *      switching to legacy PAE mode we'll have to clear these bits and when
 *      going to long mode they must be set (see the sketch below). This
 *      applies to both intermediate and shadow contexts, however we don't
 *      need to do it for the intermediate one since we're executing with
 *      CR0.WP at that time.
 *   -# CR3 allows a 32-byte aligned address in legacy mode, while in long mode
 *      a page aligned one is required.
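 *
 * A minimal sketch of the bit juggling in item 1, assuming uPdpe holds a raw
 * PDPT entry value (the mask is derived from the bit numbers given above):
 * @code
 *  uint64_t const fPdpeBits1256 = RT_BIT_64(1) | RT_BIT_64(2) | RT_BIT_64(5) | RT_BIT_64(6);
 *  uPdpe &= ~fPdpeBits1256;                 // entering legacy PAE: these bits are MBZ
 *  uPdpe |= RT_BIT_64(1) | RT_BIT_64(2);    // entering long mode: set them again
 * @endcode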
 *
 *
 * @section sec_pgm_handlers Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_phys Physical Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
 *
 * We currently implement three types of virtual access handlers: ALL, WRITE
 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERKIND for some more details.
 *
 * The HYPERVISOR access handlers are kept in a separate tree since they don't
 * apply to physical pages (PGMTREES::HyperVirtHandlers) and only need to be
 * consulted in a special \#PF case. The ALL and WRITE handlers are in the
 * PGMTREES::VirtHandlers tree; the rest of this section is going to be about
 * these handlers.
 *
 * We'll go thru the life cycle of a handler and try to make sense of it all;
 * don't know how successful this is gonna be...
 *
 * 1. A handler is registered thru the PGMR3HandlerVirtualRegister and
 *    PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual
 *    handlers and create a new node that is inserted into the AVL tree
 *    (range key). Then a full PGM resync is flagged (clear pool, sync cr3,
 *    update virtual bit of PGMPAGE).
 *
 * 2. The following PGMSyncCR3/SyncCR3 operation will first invoke
 *    HandlerVirtualUpdate.
 *
 * 2a. HandlerVirtualUpdate will look up all the pages covered by virtual
 *     handlers via the current guest CR3 and update the physical page ->
 *     virtual handler translation. Needless to say, this doesn't exactly
 *     scale very well. If any changes are detected, it will flag a virtual
 *     bit update just like we did on registration. PGMPHYS pages with changes
 *     will have their virtual handler state reset to NONE.
 *
 * 2b. The virtual bit update process will iterate all the pages covered by
 *     all the virtual handlers and update the PGMPAGE virtual handler state
 *     to the max of all virtual handlers on that page.
 *
 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make
 *     sure we don't miss any alias mappings of the monitored pages.
 *
 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
 *
 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
 *    read-only and resumed if it's a WRITE handler. If it's an ALL handler we
 *    will call the handlers like in the next step. If the physical mapping has
 *    changed we will - some time in the future - perform a handler callback
 *    (optional) and update the physical -> virtual handler cache.
 *
 * 4. \#PF(,write) on a page in the range. This will cause the handler to
 *    be invoked.
 *
 * 5. The guest invalidates the page and changes the physical backing or
 *    unmaps it. This should cause the invalidation callback to be invoked
 *    (it might not yet be 100% perfect). Exactly what happens next... is
 *    this where we mess up and end up out of sync for a while?
 *
 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
 *    We will then set all PGMPAGEs in the physical -> virtual handler cache for
 *    this handler to NONE and trigger a full PGM resync (basically the same
 *    as in step 1). Which means 2 is executed again.
 *
 *
 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
 *
 * There are a bunch of things that need to be done to make the virtual handlers
 * work 100% correctly and work more efficiently.
 *
 * The first bit hasn't been implemented yet because it's going to slow the
 * whole mess down even more, and besides it seems to be working reliably for
 * our current uses. OTOH, some of the optimizations might end up more or less
 * implementing the missing bits, so we'll see.
 *
 * On the optimization side, the first thing to do is to try to avoid unnecessary
 * cache flushing. Then try teaming up with the shadowing code to track changes
 * in mappings by means of access to them (shadow in), updates to shadow pages,
 * invlpg, and shadow PT discarding (perhaps).
 *
 * Some ideas that have popped up for optimization for current and new features:
 *    - bitmap indicating where there are virtual handlers installed.
 *      (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
 *    - Further optimize this by min/max (needs min/max avl getters).
 *    - Shadow page table entry bit (if any left)?
 *
 */


/** @page pg_pgm_phys PGM Physical Guest Memory Management
 *
 *
 * Objectives:
 *      - Guest RAM over-commitment using memory ballooning,
 *        zero pages and general page sharing.
 *      - Moving or mirroring a VM onto a different physical machine.
 *
 *
 * @section sec_pgmPhys_Definitions Definitions
 *
 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
 * machinery associated with it.
 *
 *
 *
 *
 * @section sec_pgmPhys_AllocPage Allocating a page.
 *
 * Initially we map *all* guest memory to the (per VM) zero page, which
 * means that none of the read functions will cause pages to be allocated.
 *
 * Exception: the accessed bit in page tables that have been shared. This must
 * be handled, but we must also make sure PGMGst*Modify doesn't make
 * unnecessary modifications.
 *
 * Allocation points:
 *      - PGMPhysSimpleWriteGCPhys and PGMPhysWrite.
 *      - Replacing a zero page mapping at \#PF.
 *      - Replacing a shared page mapping at \#PF.
 *      - ROM registration (currently MMR3RomRegister).
 *      - VM restore (pgmR3Load).
 *
 * For the first three it would make sense to keep a few pages handy
 * until we've reached the max memory commitment for the VM.
 *
 * For the ROM registration, we know exactly how many pages we need
 * and will request these from ring-0. For restore, we will save
 * the number of non-zero pages in the saved state and allocate
 * them up front. This would allow the ring-0 component to refuse
 * the request if there isn't sufficient memory available for VM use.
 *
 * Btw. for both ROM and restore allocations we won't be requiring
 * zeroed pages as they are going to be filled instantly.
 *
 *
 * @section sec_pgmPhys_FreePage Freeing a page
 *
 * There are a few points where a page can be freed:
 *      - After being replaced by the zero page.
 *      - After being replaced by a shared page.
 *      - After being ballooned by the guest additions.
 *      - At reset.
 *      - At restore.
 *
 * When freeing one or more pages they will be returned to the ring-0
 * component and replaced by the zero page.
 *
 * The reasoning for clearing out all the pages on reset is that it will
 * return us to the exact same state as on power on, and may thereby help
 * us reduce the memory load on the system. Further it might have a
 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
 *
 * On restore, as mentioned under the allocation topic, pages should be
 * freed / allocated depending on how many are actually required by the
 * new VM state. The simplest approach is to do like on reset, and free
 * all non-ROM pages and then allocate what we need.
 *
 * A measure to prevent some fragmentation would be to let each allocation
 * chunk have some affinity towards the VM having allocated the most pages
 * from it. Also, try to make sure to allocate from allocation chunks that
 * are almost full. Admittedly, both these measures might work counter to
 * our intentions and it's probably not worth putting a lot of effort,
 * cpu time or memory into this.
 *
 *
 * @section sec_pgmPhys_SharePage Sharing a page
 *
 * The basic idea is that there will be an idle priority kernel
 * thread walking the non-shared VM pages, hashing them and looking for
 * pages with the same checksum. If such pages are found, it will compare
 * them byte-by-byte to see if they actually are identical. If found to be
 * identical it will allocate a shared page, copy the content, check that
 * the page didn't change while doing this, and finally request both the
 * VMs to use the shared page instead. If the page is all zeros (special
 * checksum and byte-by-byte check) it will request the VM that owns it
 * to replace it with the zero page.
 *
 * To make this efficient, we will have to make sure not to try to share a
 * page that will change its contents soon. This part requires the most work.
 * A simple idea would be to request the VM to write monitor the page for
 * a while to make sure it isn't modified any time soon. Also, it may
 * make sense to skip pages that are being write monitored since this
 * information is readily available to the thread if it works on the
 * per-VM guest memory structures (presently called PGMRAMRANGE).
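 *
 * A rough sketch of that idle-thread loop, in pure pseudo-code (all the
 * sketch* helpers are hypothetical; the hash/compare steps are the ones
 * described above):
 * @code
 *  // for each non-shared page pPage in the VM:
 *  uint32_t uChecksum = sketchHashPage(pPage);            // cheap hash first
 *  PSKETCHPAGE pDup = sketchLookupByChecksum(uChecksum);
 *  if (pDup && sketchComparePageContent(pPage, pDup))     // then byte-by-byte
 *  {
 *      sketchAllocAndCopySharedPage(pPage);               // copy + recheck for changes
 *      sketchRequestVMsToUseSharedPage(pPage, pDup);      // both VMs switch over
 *  }
 * @endcode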
 *
 *
 * @section sec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
 *
 * The pages are organized in allocation chunks in ring-0; this is a necessity
 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
 * could easily work on a page-by-page basis if we liked. Whether this is possible
 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
 * become a problem as part of the idea here is that we wish to return memory to
 * the host system.
 *
 * For instance, starting two VMs at the same time, they will both allocate the
 * guest memory on-demand and if permitted their page allocations will be
 * intermixed. Shut down one of the two VMs and it will be difficult to return
 * any memory to the host system because the page allocations for the two VMs
 * are mixed up in the same allocation chunks.
 *
 * To further complicate matters, when pages are freed because they have been
 * ballooned or become shared/zero the whole idea is that the page is supposed
 * to be reused by another VM or returned to the host system. This will cause
 * allocation chunks to contain pages belonging to different VMs and prevent
 * returning memory to the host when one of those VMs shuts down.
 *
 * The only way to really deal with this problem is to move pages. This can
 * either be done at VM shutdown and/or by the idle priority worker thread
 * that will be responsible for finding sharable/zero pages. The mechanisms
 * involved for coercing a VM to move a page (or to do it for it) will be
 * the same as when telling it to share/zero a page.
 *
 *
 * @section sec_pgmPhys_Tracking Tracking Structures And Their Cost
 *
 * There's a difficult balance between keeping the per-page tracking structures
 * (global and guest page) easy to use and keeping them from eating too much
 * memory. We have limited virtual memory resources available when operating in
 * 32-bit kernel space (on 64-bit it's quite a different story). The tracking
 * structures will be designed such that we can deal with up to 32GB of memory
 * on a 32-bit system and essentially unlimited amounts on 64-bit ones.
 *
 *
 * @subsection subsec_pgmPhys_Tracking_Kernel Kernel Space
 *
 * @see pg_GMM
 *
 * @subsection subsec_pgmPhys_Tracking_PerVM Per-VM
 *
 * Fixed info is the physical address of the page (HCPhys) and the page id
 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
 * Today we're restricting ourselves to 40(-12) bits because this is the current
 * restriction of all AMD64 implementations (I think Barcelona will up this
 * to 48(-12) bits, not that it really matters) and I needed the bits for
 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
 * decent range for the page id: 2^(28+12) = 1TB.
 *
 * In addition to these, we'll have to keep maintaining the page flags as we
 * currently do. Although it wouldn't harm to optimize these quite a bit, like
 * for instance the ROM shouldn't depend on having a write handler installed
 * in order for it to become read-only. A RO/RW bit should be considered so
 * that the page syncing code doesn't have to mess about checking multiple
 * flag combinations (ROM || RW handler || write monitored) in order to
 * figure out how to setup a shadow PTE. But this, of course, is second
 * priority at present. Currently this requires 12 bits, but could probably
 * be optimized to ~8.
 *
 * Then there's the 24 bits used to track which shadow page tables are
 * currently mapping a page for the purpose of speeding up physical
 * access handlers, and thereby the page pool cache. More bits for this
 * purpose wouldn't hurt IIRC.
 *
 * Then there are new bits in which we need to record what kind of page
 * this is: shared, zero, normal or write-monitored-normal. This'll
 * require 2 bits. One bit might be needed for indicating whether a
 * write monitored page has been written to. And yet another one or
 * two for tracking migration status. 3-4 bits total then.
 *
 * Whatever is left can be used to record the shareability of a
 * page. The page checksum will not be stored in the per-VM table as
 * the idle thread will not be permitted to do modifications to it.
 * It will instead have to keep its own working set of potentially
 * shareable pages and their checksums and stuff.
 *
 * For the present we'll keep the current packing of the
 * PGMRAMRANGE::aHCPhys to keep the changes simple; only, of course,
 * we'll have to change it to a struct with a total of 128 bits at
 * our disposal.
 *
 * The initial layout will be like this:
 * @verbatim
    RTHCPHYS HCPhys;                The current stuff.
        63:40                       Current shadow PT tracking stuff.
        39:12                       The physical page frame number.
        11:0                        The current flags.
    uint32_t u28PageId : 28;        The page id.
    uint32_t u2State : 2;           The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;        Whether a write monitored page was written to.
    uint32_t u1Reserved : 1;        Reserved for later.
    uint32_t u32Reserved;           Reserved for later, mostly sharing stats.
 @endverbatim
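 *
 * A small worked example of pulling the three HCPhys sub-fields out of the
 * 64-bit word in the initial layout (the mask widths follow the bit ranges
 * given above):
 * @code
 *  uint64_t const uTracking = (HCPhys >> 40) & (RT_BIT_64(24) - 1); // 63:40
 *  uint64_t const uPfn      = (HCPhys >> 12) & (RT_BIT_64(28) - 1); // 39:12
 *  uint64_t const fFlags    =  HCPhys        & (RT_BIT_64(12) - 1); // 11:0
 * @endcode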
 *
 * The final layout will be something like this:
 * @verbatim
    RTHCPHYS HCPhys;                The current stuff.
        63:48                       High page id (12+).
        47:12                       The physical page frame number.
        11:0                        Low page id.
    uint32_t fReadOnly : 1;         Whether it's a readonly page (rom or monitored in some way).
    uint32_t u3Type : 3;            The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
    uint32_t u2PhysMon : 2;         Physical access handler type {none, read, write, all}.
    uint32_t u2VirtMon : 2;         Virtual access handler type {none, read, write, all}.
    uint32_t u2State : 2;           The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;        Whether a write monitored page was written to.
    uint32_t u20Reserved : 20;      Reserved for later, mostly sharing stats.
    uint32_t u32Tracking;           The shadow PT tracking stuff, roughly.
 @endverbatim
 *
 * Cost wise, this means we'll double the cost for guest memory. There isn't
 * any way around that I'm afraid. It means that the cost of dealing out 32GB
 * of memory to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MBs.
 * Or another example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
 *
 * A couple of cost examples for the total cost per-VM + kernel.
 *      32-bit Windows and 32-bit linux:
 *           1GB guest ram, 256K pages:    4MB +  2MB(+) =   6MB
 *           4GB guest ram, 1M pages:     16MB +  8MB(+) =  24MB
 *          32GB guest ram, 8M pages:    128MB + 64MB(+) = 192MB
 *      64-bit Windows and 64-bit linux:
 *           1GB guest ram, 256K pages:    4MB +  3MB(+) =   7MB
 *           4GB guest ram, 1M pages:     16MB + 12MB(+) =  28MB
 *          32GB guest ram, 8M pages:    128MB + 96MB(+) = 224MB
 *
 * UPDATE - 2007-09-27:
 *      Will need a ballooned flag/state too because we cannot
 *      trust the guest 100% and reporting the same page as ballooned more
 *      than once will put the GMM off balance.
 *
 *
 * @section sec_pgmPhys_Serializing Serializing Access
 *
 * Initially, we'll try a simple scheme:
 *
 *      - The per-VM RAM tracking structures (PGMRAMRANGE) are only modified
 *        by the EMT thread of that VM while in the pgm critsect.
 *      - Other threads in the VM process that need to make reliable use of
 *        the per-VM RAM tracking structures will enter the critsect.
 *      - No process external thread or kernel thread will ever try to enter
 *        the pgm critical section, as that just won't work.
 *      - The idle thread (and similar threads) doesn't need 100% reliable
 *        data when performing its tasks as the EMT thread will be the one to
 *        do the actual changes later anyway. So, as long as it only accesses
 *        the main ram range, it can do so by somehow preventing the VM from
 *        being destroyed while it works on it...
 *
 *      - The over-commitment management, including the allocating/freeing
 *        chunks, is serialized by a ring-0 mutex lock (a fast one since the
 *        more mundane mutex implementation is broken on Linux).
 *      - A separate mutex protects the set of allocation chunks so
 *        that pages can be shared and/or freed up while some other VM is
 *        allocating more chunks. This mutex can be taken from under the other
 *        one, but not the other way around.
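 *
 * The ring-3 side of the first rule, as a minimal sketch (pgmLock/pgmUnlock
 * are the internal critsect helpers; the structure-modifying step in the
 * middle is hypothetical):
 * @code
 *  pgmLock(pVM);                        // enter the PGM critical section
 *  rc = sketchModifyRamRanges(pVM);     // EMT-only structure updates happen here
 *  pgmUnlock(pVM);                      // always leave again, even on failure
 * @endcode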
 *
 *
 * @section sec_pgmPhys_Request VM Request interface
 *
 * When in ring-0 it will become necessary to send requests to a VM so it can
 * for instance move a page while defragmenting during VM destroy. The idle
 * thread will make use of this interface to request VMs to setup shared
 * pages and to perform write monitoring of pages.
 *
 * I would propose an interface similar to the current VMReq interface, similar
 * in that it doesn't require locking and that the one sending the request may
 * wait for completion if it wishes to. This shouldn't be very difficult to
 * realize.
 *
 * The requests themselves are also pretty simple. They are basically:
 *      -# Check that some precondition is still true.
 *      -# Do the update.
 *      -# Update all shadow page tables involved with the page.
 *
 * The 3rd step is identical to what we're already doing when updating a
 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
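 *
 * A pseudo-code sketch of such a request handler (all names hypothetical
 * except VERR_TRY_AGAIN/VINF_SUCCESS; the three statements map onto the
 * three numbered steps above):
 * @code
 *  int pgmSketchHandleMovePageReq(PVM pVM, PSKETCHMOVEPAGEREQ pReq)
 *  {
 *      if (!sketchPreconditionStillTrue(pVM, pReq))   // 1. recheck the precondition
 *          return VERR_TRY_AGAIN;
 *      sketchDoTheUpdate(pVM, pReq);                  // 2. do the update
 *      sketchFlushShadowPTs(pVM, pReq->GCPhys);       // 3. update the shadow PTs
 *      return VINF_SUCCESS;
 *  }
 * @endcode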
 *
 *
 *
 * @section sec_pgmPhys_MappingCaches Mapping Caches
 *
 * In order to be able to map memory in and out and to be able to support
 * guests with more RAM than we've got virtual address space, we'll employ
 * a mapping cache. Normally ring-0 and ring-3 can share the same cache,
 * however on 32-bit darwin the ring-0 code is running in a different memory
 * context and therefore needs a separate cache. In raw-mode context we also
 * need a separate cache. The 32-bit darwin mapping cache and the one for
 * raw-mode context share a lot of code, see PGMRZDYNMAP.
 *
 *
 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
 *
 * We've considered implementing the ring-3 mapping cache page based but found
 * that this was bothersome when one had to take into account TLBs+SMP and
 * portability (missing the necessary APIs on several platforms). There were
 * also some performance concerns with this approach which hadn't quite been
 * worked out.
 *
 * Instead, we'll be mapping allocation chunks into the VM process. This
 * simplifies matters quite a bit since we don't need to invent any new ring-0
 * stuff, only some minor RTR0MEMOBJ mapping stuff. The main concern compared
 * to the previous idea is that mapping or unmapping a 1MB chunk is more costly
 * than a single page, although how much more costly is uncertain. We'll try
 * to address this by using a very big cache, preferably bigger than the
 * actual VM RAM size if possible. The current VM RAM sizes should give some
 * idea for 32-bit boxes, while on 64-bit we can probably get away with
 * employing an unlimited cache.
 *
 * The cache has two parts, as already indicated: the ring-3 side and the
 * ring-0 side.
 *
 * The ring-0 part will be tied to the page allocator since it will operate on
 * the memory objects it contains. It will therefore require the first ring-0
 * mutex discussed in @ref sec_pgmPhys_Serializing. We'll need some double
 * housekeeping wrt who has mapped what, I think, since both VMMR0.r0 and
 * RTR0MemObj will keep track of mapping relations.
 *
 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
 * require anyone that desires to do changes to the mapping cache to do that
 * from within this critsect. Alternatively, we could employ a separate critsect
 * for serializing changes to the mapping cache as this would reduce potential
 * contention with other threads accessing mappings unrelated to the changes
 * that are in progress. We can see about this later; contention will show
 * up in the statistics anyway, so it'll be simple to tell.
 *
 * The organization of the ring-3 part will be very much like how the allocation
 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
 * having to walk the tree all the time, we'll have a couple of lookaside entries
 * like we do for I/O ports and MMIO in IOM.
 *
 * The simplified flow of a PGMPhysRead/Write function:
 *      -# Enter the PGM critsect.
 *      -# Lookup GCPhys in the ram ranges and get the Page ID.
 *      -# Calc the Allocation Chunk ID from the Page ID.
 *      -# Check the lookaside entries and then the AVL tree for the Chunk ID.
 *         If not found in cache:
 *              -# Call ring-0 and request it to be mapped and supply
 *                 a chunk to be unmapped if the cache is maxed out already.
 *              -# Insert the new mapping into the AVL tree (id + R3 address).
 *      -# Update the relevant lookaside entry and return the mapping address.
 *      -# Do the read/write according to monitoring flags and everything.
 *      -# Leave the critsect.
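 *
 * The same flow as a minimal C sketch (the sketch* helpers are hypothetical;
 * GMM_CHUNKID_SHIFT is the real page-id-to-chunk-id shift from gmm.h):
 * @code
 *  int pgmSketchPhysAccess(PVM pVM, RTGCPHYS GCPhys, void *pvBuf, size_t cbBuf, bool fWrite)
 *  {
 *      pgmLock(pVM);                                                // enter the critsect
 *      uint32_t const idPage  = sketchGCPhysToPageId(pVM, GCPhys);  // ram range lookup
 *      uint32_t const idChunk = idPage >> GMM_CHUNKID_SHIFT;        // page id -> chunk id
 *      void *pvChunk = sketchCacheLookup(pVM, idChunk);             // lookaside + AVL tree
 *      if (!pvChunk)
 *          pvChunk = sketchMapChunkFromRing0(pVM, idChunk);         // may unmap another chunk
 *      int rc = sketchDoAccess(pvChunk, GCPhys, pvBuf, cbBuf, fWrite); // honours monitoring flags
 *      pgmUnlock(pVM);                                              // leave the critsect
 *      return rc;
 *  }
 * @endcode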
 *
 *
 * @section sec_pgmPhys_Fallback Fallback
 *
 * Currently all the "second tier" hosts will not support the RTR0MemObjAllocPhysNC
 * API and thus require a fallback.
 *
 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
 * will return to the ring-3 caller (and later ring-0) and ask it to seed
 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
 * then perform an SUPR3PageAlloc(cbChunk >> PAGE_SHIFT) call and make a
 * "SeededAllocPages" call to ring-0.
 *
 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
 * all page sharing (zero page detection will continue). It will also force
 * all allocations to come from the VM which seeded the page. Both these
 * measures are taken to make sure that there will never be any need for
 * mapping anything into ring-3 - everything will be mapped already.
 *
 * Whether we'll continue to use the current MM locked memory management
 * for this I don't quite know (I'd prefer not to and just ditch it
 * altogether); we'll see what's simplest to do.
 *
 *
 *
 * @section sec_pgmPhys_Changes Changes
 *
 * Breakdown of the changes involved?
 */


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_PGM
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/pgm.h>
#include <VBox/vmm/cpum.h>
#include <VBox/vmm/iom.h>
#include <VBox/sup.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/em.h>
#include <VBox/vmm/stam.h>
#ifdef VBOX_WITH_REM
# include <VBox/vmm/rem.h>
#endif
#include <VBox/vmm/selm.h>
#include <VBox/vmm/ssm.h>
#include <VBox/vmm/hm.h>
#include "PGMInternal.h"
#include <VBox/vmm/vm.h>
#include <VBox/vmm/uvm.h>
#include "PGMInline.h"

#include <VBox/dbg.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <iprt/asm.h>
#include <iprt/asm-amd64-x86.h>
#include <iprt/assert.h>
#include <iprt/env.h>
#include <iprt/mem.h>
#include <iprt/file.h>
#include <iprt/string.h>
#include <iprt/thread.h>


/*********************************************************************************************************************************
*   Structures and Typedefs                                                                                                      *
*********************************************************************************************************************************/
/**
 * Argument package for pgmR3RelocatePhysHandler, pgmR3RelocateVirtHandler and
 * pgmR3RelocateHyperVirtHandler.
 */
typedef struct PGMRELOCHANDLERARGS
{
    RTGCINTPTR offDelta;
    PVM        pVM;
} PGMRELOCHANDLERARGS;
/** Pointer to a page access handler relocation argument package. */
typedef PGMRELOCHANDLERARGS const *PCPGMRELOCHANDLERARGS;


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
static int                pgmR3InitPaging(PVM pVM);
static int                pgmR3InitStats(PVM pVM);
static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(int)  pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
#ifdef VBOX_WITH_RAW_MODE
static DECLCALLBACK(int)  pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
#endif /* VBOX_WITH_RAW_MODE */
#ifdef VBOX_STRICT
static FNVMATSTATE        pgmR3ResetNoMorePhysWritesFlag;
#endif
static int                pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
static void               pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst);
static PGMMODE            pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);

#ifdef VBOX_WITH_DEBUGGER
static FNDBGCCMD pgmR3CmdError;
static FNDBGCCMD pgmR3CmdSync;
static FNDBGCCMD pgmR3CmdSyncAlways;
# ifdef VBOX_STRICT
static FNDBGCCMD pgmR3CmdAssertCR3;
# endif
static FNDBGCCMD pgmR3CmdPhysToFile;
#endif


/*********************************************************************************************************************************
*   Global Variables                                                                                                             *
*********************************************************************************************************************************/
#ifdef VBOX_WITH_DEBUGGER
/** Argument descriptors for '.pgmerror' and '.pgmerroroff'. */
static const DBGCVARDESC g_aPgmErrorArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory, fFlags, pszName, pszDescription */
    { 0, 1, DBGCVAR_CAT_STRING, 0, "where", "Error injection location." },
};

static const DBGCVARDESC g_aPgmPhysToFileArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory, fFlags, pszName, pszDescription */
    { 1, 1, DBGCVAR_CAT_STRING, 0, "file",   "The file name." },
    { 0, 1, DBGCVAR_CAT_STRING, 0, "nozero", "If present, zero pages are skipped." },
};

# ifdef DEBUG_sandervl
static const DBGCVARDESC g_aPgmCountPhysWritesArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory, fFlags, pszName, pszDescription */
    { 1, 1, DBGCVAR_CAT_STRING,          0, "enabled",  "on/off." },
    { 1, 1, DBGCVAR_CAT_NUMBER_NO_RANGE, 0, "interval", "Interval in ms." },
};
# endif

/** Command descriptors. */
static const DBGCCMD g_aCmds[] =
{
    /* pszCmd, cArgsMin, cArgsMax, paArgDesc, cArgDescs, fFlags, pfnHandler, pszSyntax, pszDescription */
    { "pgmsync",        0, 0, NULL,                0, 0, pgmR3CmdSync,       "", "Sync the CR3 page." },
    { "pgmerror",       0, 1, &g_aPgmErrorArgs[0], 1, 0, pgmR3CmdError,      "", "Enables injection of runtime errors into parts of PGM." },
    { "pgmerroroff",    0, 1, &g_aPgmErrorArgs[0], 1, 0, pgmR3CmdError,      "", "Disables injection of runtime errors into parts of PGM." },
# ifdef VBOX_STRICT
    { "pgmassertcr3",   0, 0, NULL,                0, 0, pgmR3CmdAssertCR3,  "", "Check the shadow CR3 mapping." },
#  ifdef VBOX_WITH_PAGE_SHARING
    { "pgmcheckduppages", 0, 0, NULL,              0, 0, pgmR3CmdCheckDuplicatePages, "", "Check for duplicate pages in all running VMs." },
    { "pgmsharedmodules", 0, 0, NULL,              0, 0, pgmR3CmdShowSharedModules,   "", "Print shared modules info." },
#  endif
# endif
    { "pgmsyncalways",  0, 0, NULL,                0, 0, pgmR3CmdSyncAlways, "", "Toggle permanent CR3 syncing." },
    { "pgmphystofile",  1, 2, &g_aPgmPhysToFileArgs[0], 2, 0, pgmR3CmdPhysToFile, "", "Save the physical memory to file." },
};
#endif




/*
 * Shadow - 32-bit mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_32BIT
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_32BIT(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_32BIT_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_32BIT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - PAE mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_PAE
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_PAE(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_PAE_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_FOR_32BIT
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - AMD64 mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_AMD64
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_AMD64(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_AMD64_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_AMD64_STR(name)
#include "PGMShw.h"

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_AMD64_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_AMD64_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# define BTH_PGMPOOLKIND_ROOT       PGMPOOLKIND_64BIT_PML4
# include "PGMBth.h"
# include "PGMGstDefs.h"
# include "PGMGst.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef BTH_PGMPOOLKIND_ROOT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - Nested paging mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_NESTED
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_NESTED(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_NESTED_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_NESTED_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_NESTED_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_NESTED_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - EPT
 */
#define PGM_SHW_TYPE                PGM_TYPE_EPT
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_EPT(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_EPT_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_EPT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_EPT_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_EPT_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR



/**
 * Initiates the paging of the VM.
 *
 * @returns VBox status code.
 * @param   pVM     The cross context VM structure.
 */
VMMR3DECL(int) PGMR3Init(PVM pVM)
{
    LogFlow(("PGMR3Init:\n"));
    PCFGMNODE pCfgPGM = CFGMR3GetChild(CFGMR3GetRoot(pVM), "/PGM");
    int rc;

    /*
     * Assert alignment and sizes.
     */
    AssertCompile(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
    AssertCompile(sizeof(pVM->aCpus[0].pgm.s) <= sizeof(pVM->aCpus[0].pgm.padding));
    AssertCompileMemberAlignment(PGM, CritSectX, sizeof(uintptr_t));

    /*
     * Init the structure.
     */
    pVM->pgm.s.offVM      = RT_OFFSETOF(VM, pgm.s);
    pVM->pgm.s.offVCpuPGM = RT_OFFSETOF(VMCPU, pgm.s);
    /*pVM->pgm.s.fRestoreRomPagesAtReset = false;*/

    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHandyPages); i++)
    {
        pVM->pgm.s.aHandyPages[i].HCPhysGCPhys = NIL_RTHCPHYS;
        pVM->pgm.s.aHandyPages[i].idPage       = NIL_GMM_PAGEID;
        pVM->pgm.s.aHandyPages[i].idSharedPage = NIL_GMM_PAGEID;
    }

    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aLargeHandyPage); i++)
    {
        pVM->pgm.s.aLargeHandyPage[i].HCPhysGCPhys = NIL_RTHCPHYS;
        pVM->pgm.s.aLargeHandyPage[i].idPage       = NIL_GMM_PAGEID;
        pVM->pgm.s.aLargeHandyPage[i].idSharedPage = NIL_GMM_PAGEID;
    }

1271 /* Init the per-CPU part. */
1272 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
1273 {
1274 PVMCPU pVCpu = &pVM->aCpus[idCpu];
1275 PPGMCPU pPGM = &pVCpu->pgm.s;
1276
1277 pPGM->offVM = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)pVM;
1278 pPGM->offVCpu = RT_OFFSETOF(VMCPU, pgm.s);
1279 pPGM->offPGM = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)&pVM->pgm.s;
1280
1281 pPGM->enmShadowMode = PGMMODE_INVALID;
1282 pPGM->enmGuestMode = PGMMODE_INVALID;
1283
1284 pPGM->GCPhysCR3 = NIL_RTGCPHYS;
1285
1286 pPGM->pGst32BitPdR3 = NULL;
1287 pPGM->pGstPaePdptR3 = NULL;
1288 pPGM->pGstAmd64Pml4R3 = NULL;
1289#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1290 pPGM->pGst32BitPdR0 = NIL_RTR0PTR;
1291 pPGM->pGstPaePdptR0 = NIL_RTR0PTR;
1292 pPGM->pGstAmd64Pml4R0 = NIL_RTR0PTR;
1293#endif
1294 pPGM->pGst32BitPdRC = NIL_RTRCPTR;
1295 pPGM->pGstPaePdptRC = NIL_RTRCPTR;
1296 for (unsigned i = 0; i < RT_ELEMENTS(pVCpu->pgm.s.apGstPaePDsR3); i++)
1297 {
1298 pPGM->apGstPaePDsR3[i] = NULL;
1299#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1300 pPGM->apGstPaePDsR0[i] = NIL_RTR0PTR;
1301#endif
1302 pPGM->apGstPaePDsRC[i] = NIL_RTRCPTR;
1303 pPGM->aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
1304 pPGM->aGstPaePdpeRegs[i].u = UINT64_MAX;
1305 pPGM->aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
1306 }
1307
1308 pPGM->fA20Enabled = true;
1309 pPGM->GCPhysA20Mask = ~((RTGCPHYS)!pPGM->fA20Enabled << 20);
1310 }
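    /* A quick worked example of the GCPhysA20Mask expression above: with the
     * gate enabled, !fA20Enabled is 0 and the mask becomes ~0 (no masking at
     * all); with it disabled, the mask is ~(1 << 20) = 0xffffffffffefffff, so
     * physical bit 20 is always cleared, mimicking the ISA A20 gate. A
     * stand-alone sketch (the function name is illustrative only):
     * @code
     *      uint64_t calcA20Mask(bool fA20Enabled)
     *      {
     *          return ~((uint64_t)!fA20Enabled << 20);
     *      }
     * @endcode
     */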
1311
1312 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1313 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
1314 pVM->pgm.s.GCPtrPrevRamRangeMapping = MM_HYPER_AREA_ADDRESS;
1315
1316 rc = CFGMR3QueryBoolDef(CFGMR3GetRoot(pVM), "RamPreAlloc", &pVM->pgm.s.fRamPreAlloc,
1317#ifdef VBOX_WITH_PREALLOC_RAM_BY_DEFAULT
1318 true
1319#else
1320 false
1321#endif
1322 );
1323 AssertLogRelRCReturn(rc, rc);
1324
1325#if HC_ARCH_BITS == 32
1326# ifdef RT_OS_DARWIN
1327 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE * 3);
1328# else
1329 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE);
1330# endif
1331#else
1332 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, UINT32_MAX);
1333#endif
1334 AssertLogRelRCReturn(rc, rc);
1335 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->pgm.s.ChunkR3Map.Tlb.aEntries); i++)
1336 pVM->pgm.s.ChunkR3Map.Tlb.aEntries[i].idChunk = NIL_GMM_CHUNKID;
1337
1338 /*
1339 * Get the configured RAM size - to estimate saved state size.
1340 */
1341 uint64_t cbRam;
1342 rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
1343 if (rc == VERR_CFGM_VALUE_NOT_FOUND)
1344 cbRam = 0;
1345 else if (RT_SUCCESS(rc))
1346 {
1347 if (cbRam < PAGE_SIZE)
1348 cbRam = 0;
1349 cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
1350 }
1351 else
1352 {
1353 AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Rrc.\n", rc));
1354 return rc;
1355 }
1356
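    /* To illustrate the RamSize sanitation above (with PAGE_SIZE being 4K):
     * a value of 0x0fff is below one page and becomes 0, 0x1001 is rounded up
     * by RT_ALIGN_64 to 0x2000, and an already page-aligned 0x2000 is left
     * unchanged. */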
1357 /*
1358 * Check for PCI pass-through and other configurables.
1359 */
1360 rc = CFGMR3QueryBoolDef(pCfgPGM, "PciPassThrough", &pVM->pgm.s.fPciPassthrough, false);
1361 AssertMsgRCReturn(rc, ("Configuration error: Failed to query boolean \"PciPassThrough\", rc=%Rrc.\n", rc), rc);
1362 AssertLogRelReturn(!pVM->pgm.s.fPciPassthrough || pVM->pgm.s.fRamPreAlloc, VERR_INVALID_PARAMETER);
1363
1364 rc = CFGMR3QueryBoolDef(CFGMR3GetRoot(pVM), "PageFusionAllowed", &pVM->pgm.s.fPageFusionAllowed, false);
1365 AssertLogRelRCReturn(rc, rc);
1366
1367 /** @cfgm{/PGM/ZeroRamPagesOnReset, boolean, true}
1368 * Whether to clear RAM pages on (hard) reset. */
1369 rc = CFGMR3QueryBoolDef(pCfgPGM, "ZeroRamPagesOnReset", &pVM->pgm.s.fZeroRamPagesOnReset, true);
1370 AssertLogRelRCReturn(rc, rc);
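    /* For reference, CFGM keys like this one can typically be set from the
     * host shell via the extradata-to-CFGM mapping, e.g. (VM name is just an
     * example):
     * @code
     *      VBoxManage setextradata "MyVM" "VBoxInternal/PGM/ZeroRamPagesOnReset" 0
     * @endcode
     * The "VBoxInternal/" prefix is what routes an extradata value into the
     * CFGM tree. */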
1371
1372#ifdef VBOX_WITH_STATISTICS
1373 /*
1374 * Allocate memory for the statistics before someone tries to use them.
1375 */
1376 size_t cbTotalStats = RT_ALIGN_Z(sizeof(PGMSTATS), 64) + RT_ALIGN_Z(sizeof(PGMCPUSTATS), 64) * pVM->cCpus;
1377 void *pv;
1378 rc = MMHyperAlloc(pVM, RT_ALIGN_Z(cbTotalStats, PAGE_SIZE), PAGE_SIZE, MM_TAG_PGM, &pv);
1379 AssertRCReturn(rc, rc);
1380
1381 pVM->pgm.s.pStatsR3 = (PGMSTATS *)pv;
1382 pVM->pgm.s.pStatsR0 = MMHyperCCToR0(pVM, pv);
1383 pVM->pgm.s.pStatsRC = MMHyperCCToRC(pVM, pv);
1384 pv = (uint8_t *)pv + RT_ALIGN_Z(sizeof(PGMSTATS), 64);
1385
1386 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1387 {
1388 pVM->aCpus[iCpu].pgm.s.pStatsR3 = (PGMCPUSTATS *)pv;
1389 pVM->aCpus[iCpu].pgm.s.pStatsR0 = MMHyperCCToR0(pVM, pv);
1390 pVM->aCpus[iCpu].pgm.s.pStatsRC = MMHyperCCToRC(pVM, pv);
1391
1392 pv = (uint8_t *)pv + RT_ALIGN_Z(sizeof(PGMCPUSTATS), 64);
1393 }
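    /* The carving above packs one PGMSTATS block followed by one PGMCPUSTATS
     * block per VCPU into a single page-aligned allocation, each block rounded
     * up to a 64 byte boundary to keep the blocks cache-line aligned. E.g.
     * with sizeof(PGMSTATS)=0x1234 and two VCPUs of sizeof(PGMCPUSTATS)=0x456
     * each (made-up sizes), the total is
     * RT_ALIGN_Z(0x1234,64) + 2*RT_ALIGN_Z(0x456,64) = 0x1240 + 2*0x480 = 0x1b40,
     * which is then rounded up to a whole page for the MMHyperAlloc call. */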
1394#endif /* VBOX_WITH_STATISTICS */
1395
1396 /*
1397 * Register callbacks, string formatters and the saved state data unit.
1398 */
1399#ifdef VBOX_STRICT
1400 VMR3AtStateRegister(pVM->pUVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
1401#endif
1402 PGMRegisterStringFormatTypes();
1403
1404 rc = pgmR3InitSavedState(pVM, cbRam);
1405 if (RT_FAILURE(rc))
1406 return rc;
1407
1408 /*
1409 * Initialize the PGM critical section and flush the phys TLBs
1410 */
1411 rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSectX, RT_SRC_POS, "PGM");
1412 AssertRCReturn(rc, rc);
1413
1414 PGMR3PhysChunkInvalidateTLB(pVM);
1415 pgmPhysInvalidatePageMapTLB(pVM);
1416
1417 /*
1418 * For the time being we sport a full set of handy pages in addition to the base
1419 * memory to simplify things.
1420 */
1421 rc = MMR3ReserveHandyPages(pVM, RT_ELEMENTS(pVM->pgm.s.aHandyPages)); /** @todo this should be changed to PGM_HANDY_PAGES_MIN but this needs proper testing... */
1422 AssertRCReturn(rc, rc);
1423
1424 /*
1425 * Trees
1426 */
1427 rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesR3);
1428 if (RT_SUCCESS(rc))
1429 {
1430 pVM->pgm.s.pTreesR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pTreesR3);
1431 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
1432 }
1433
1434 /*
1435 * Allocate the zero page.
1436 */
1437 if (RT_SUCCESS(rc))
1438 {
1439 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
1440 if (RT_SUCCESS(rc))
1441 {
1442 pVM->pgm.s.pvZeroPgRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
1443 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1444 pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
1445 AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
1446 }
1447 }
1448
1449 /*
1450 * Allocate the invalid MMIO page.
1451 * (The invalid bits in HCPhysInvMmioPg are set later, when init completes.)
1452 */
1453 if (RT_SUCCESS(rc))
1454 {
1455 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvMmioPgR3);
1456 if (RT_SUCCESS(rc))
1457 {
1458 ASMMemFill32(pVM->pgm.s.pvMmioPgR3, PAGE_SIZE, 0xfeedface);
1459 pVM->pgm.s.HCPhysMmioPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvMmioPgR3);
1460 AssertRelease(pVM->pgm.s.HCPhysMmioPg != NIL_RTHCPHYS);
1461 pVM->pgm.s.HCPhysInvMmioPg = pVM->pgm.s.HCPhysMmioPg;
1462 }
1463 }
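    /* ASMMemFill32 above just stamps the whole page with a 32-bit pattern, so
     * any stray read through the dummy MMIO page shows up as 0xfeedface in
     * dumps. An equivalent portable sketch (the helper name is illustrative,
     * not IPRT API):
     * @code
     *      static void fillPage32(void *pv, uint32_t u32Pattern)
     *      {
     *          uint32_t *pu32 = (uint32_t *)pv;
     *          for (size_t i = 0; i < PAGE_SIZE / sizeof(uint32_t); i++)
     *              pu32[i] = u32Pattern;
     *      }
     * @endcode
     */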
1464
1465 /*
1466 * Register the physical access handler protecting ROMs.
1467 */
1468 if (RT_SUCCESS(rc))
1469 rc = PGMR3HandlerPhysicalTypeRegister(pVM, PGMPHYSHANDLERKIND_WRITE,
1470 pgmPhysRomWriteHandler,
1471 NULL, NULL, "pgmPhysRomWritePfHandler",
1472 NULL, NULL, "pgmPhysRomWritePfHandler",
1473 "ROM write protection",
1474 &pVM->pgm.s.hRomPhysHandlerType);
1475
1476 /*
1477 * Init the paging.
1478 */
1479 if (RT_SUCCESS(rc))
1480 rc = pgmR3InitPaging(pVM);
1481
1482 /*
1483 * Init the page pool.
1484 */
1485 if (RT_SUCCESS(rc))
1486 rc = pgmR3PoolInit(pVM);
1487
1488 if (RT_SUCCESS(rc))
1489 {
1490 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1491 {
1492 PVMCPU pVCpu = &pVM->aCpus[i];
1493 rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
1494 if (RT_FAILURE(rc))
1495 break;
1496 }
1497 }
1498
1499 if (RT_SUCCESS(rc))
1500 {
1501 /*
1502 * Info & statistics
1503 */
1504 DBGFR3InfoRegisterInternal(pVM, "mode",
1505 "Shows the current paging mode. "
1506 "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing is given.",
1507 pgmR3InfoMode);
1508 DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
1509 "Dumps all the entries in the top level paging table. No arguments.",
1510 pgmR3InfoCr3);
1511 DBGFR3InfoRegisterInternal(pVM, "phys",
1512 "Dumps all the physical address ranges. No arguments.",
1513 pgmR3PhysInfo);
1514 DBGFR3InfoRegisterInternal(pVM, "handlers",
1515 "Dumps physical, virtual and hyper virtual handlers. "
1516 "Pass 'phys', 'virt', 'hyper' as argument if only one kind is wanted. "
1517 "Add 'nost' if the statistics are unwanted; use together with 'all' or explicit selection.",
1518 pgmR3InfoHandlers);
1519 DBGFR3InfoRegisterInternal(pVM, "mappings",
1520 "Dumps guest mappings.",
1521 pgmR3MapInfo);
1522
1523 pgmR3InitStats(pVM);
1524
1525#ifdef VBOX_WITH_DEBUGGER
1526 /*
1527 * Debugger commands.
1528 */
1529 static bool s_fRegisteredCmds = false;
1530 if (!s_fRegisteredCmds)
1531 {
1532 int rc2 = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
1533 if (RT_SUCCESS(rc2))
1534 s_fRegisteredCmds = true;
1535 }
1536#endif
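        /* s_fRegisteredCmds is deliberately static: DBGC commands are global
         * to the process, so they must only be registered once even if several
         * VMs are created in the same process. */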
1537 return VINF_SUCCESS;
1538 }
1539
1540 /* Almost no cleanup necessary, MM frees all memory. */
1541 PDMR3CritSectDelete(&pVM->pgm.s.CritSectX);
1542
1543 return rc;
1544}
1545
1546
1547/**
1548 * Init paging.
1549 *
1550 * Since we need to check what mode the host is operating in before we can choose
1551 * the right paging functions for the host, we have to delay this until R0 has
1552 * been initialized.
1553 *
1554 * @returns VBox status code.
1555 * @param pVM The cross context VM structure.
1556 */
1557static int pgmR3InitPaging(PVM pVM)
1558{
1559 /*
1560 * Force a recalculation of modes and switcher so everyone gets notified.
1561 */
1562 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1563 {
1564 PVMCPU pVCpu = &pVM->aCpus[i];
1565
1566 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
1567 pVCpu->pgm.s.enmGuestMode = PGMMODE_INVALID;
1568 }
1569
1570 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1571
1572 /*
1573 * Allocate static mapping space for whatever the cr3 register
1574 * points to and in the case of PAE mode to the 4 PDs.
1575 */
1576 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
1577 if (RT_FAILURE(rc))
1578 {
1579 AssertMsgFailed(("Failed to reserve five pages for the CR3 mapping in the HMA, rc=%Rrc\n", rc));
1580 return rc;
1581 }
1582 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
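    /* The extra page reserved as "fence" is (presumably) never mapped, leaving
     * an unbacked guard page right after the CR3 mapping area so that an
     * overrun faults immediately instead of silently corrupting a neighbouring
     * mapping. */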
1583
1584 /*
1585 * Allocate pages for the three possible intermediate contexts
1586 * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
1587 * for the sake of simplicity. The AMD64 context reuses the PAE pages for
1588 * the lower levels, making the total number of pages 12 (3 + 7 + 2).
1589 *
1590 * We assume that two page tables will be enough for the core code
1591 * mappings (HC virtual and identity).
1592 */
1593 pVM->pgm.s.pInterPD = (PX86PD)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.pInterPD, VERR_NO_PAGE_MEMORY);
1594 pVM->pgm.s.apInterPTs[0] = (PX86PT)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.apInterPTs[0], VERR_NO_PAGE_MEMORY);
1595 pVM->pgm.s.apInterPTs[1] = (PX86PT)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.apInterPTs[1], VERR_NO_PAGE_MEMORY);
1596 pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePTs[0], VERR_NO_PAGE_MEMORY);
1597 pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePTs[1], VERR_NO_PAGE_MEMORY);
1598 pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePDs[0], VERR_NO_PAGE_MEMORY);
1599 pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePDs[1], VERR_NO_PAGE_MEMORY);
1600 pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePDs[2], VERR_NO_PAGE_MEMORY);
1601 pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM); AssertReturn(pVM->pgm.s.apInterPaePDs[3], VERR_NO_PAGE_MEMORY);
1602 pVM->pgm.s.pInterPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.pInterPaePDPT, VERR_NO_PAGE_MEMORY);
1603 pVM->pgm.s.pInterPaePDPT64 = (PX86PDPT)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.pInterPaePDPT64, VERR_NO_PAGE_MEMORY);
1604 pVM->pgm.s.pInterPaePML4 = (PX86PML4)MMR3PageAllocLow(pVM); AssertReturn(pVM->pgm.s.pInterPaePML4, VERR_NO_PAGE_MEMORY);
1605
1606 pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
1607 AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
1608 pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
1609 AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
1610 pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
1611 AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK) && pVM->pgm.s.HCPhysInterPaePML4 < 0xffffffff);
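    /* Note the extra < 0xffffffff check on the PML4 address: it is allocated
     * with MMR3PageAllocLow() because the world switcher has to be able to
     * load it into CR3 while still executing in a 32-bit/PAE context, where
     * only addresses below 4GB are reachable. */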
1612
1613 /*
1614 * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
1615 */
1616 ASMMemZeroPage(pVM->pgm.s.pInterPD);
1617 ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
1618 ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);
1619
1620 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
1621 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);
1622
1623 ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
1624 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
1625 {
1626 ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
1627 pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
1628 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
1629 }
1630
1631 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
1632 {
1633 const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
1634 pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
1635 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
1636 }
1637
1638 RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
1639 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
1640 pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
1641 | HCPhysInterPaePDPT64;
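    /* The net effect of the two loops above: every PML4 entry references the
     * same PDPT, whose 512 entries in turn cycle through the same four PDs
     * (i % 4), so in the intermediate context any 64-bit address resolves
     * exactly like its low 32 bits would -- hence the "repetitive 4GB action".
     * A sketch of the equivalent address folding (illustrative only):
     * @code
     *      uint64_t interCtxEffectiveAddr(uint64_t GCPtr)
     *      {
     *          return GCPtr & (RT_BIT_64(32) - 1); // i.e. GCPtr % 4GB
     *      }
     * @endcode
     */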
1642
1643 /*
1644 * Initialize the paging workers and the mode from the current host mode,
1645 * with the guest initially running in real mode.
1646 */
1647 pVM->pgm.s.enmHostMode = SUPR3GetPagingMode();
1648 switch (pVM->pgm.s.enmHostMode)
1649 {
1650 case SUPPAGINGMODE_32_BIT:
1651 case SUPPAGINGMODE_32_BIT_GLOBAL:
1652 case SUPPAGINGMODE_PAE:
1653 case SUPPAGINGMODE_PAE_GLOBAL:
1654 case SUPPAGINGMODE_PAE_NX:
1655 case SUPPAGINGMODE_PAE_GLOBAL_NX:
1656 break;
1657
1658 case SUPPAGINGMODE_AMD64:
1659 case SUPPAGINGMODE_AMD64_GLOBAL:
1660 case SUPPAGINGMODE_AMD64_NX:
1661 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
1662 if (ARCH_BITS != 64)
1663 {
1664 AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1665 LogRel(("PGM: Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1666 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1667 }
1668 break;
1669 default:
1670 AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
1671 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1672 }
1673 rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
1674 if (RT_SUCCESS(rc))
1675 {
1676 LogFlow(("pgmR3InitPaging: returns successfully\n"));
1677#if HC_ARCH_BITS == 64
1678 LogRel(("PGM: HCPhysInterPD=%RHp HCPhysInterPaePDPT=%RHp HCPhysInterPaePML4=%RHp\n",
1679 pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
1680 LogRel(("PGM: apInterPTs={%RHp,%RHp} apInterPaePTs={%RHp,%RHp} apInterPaePDs={%RHp,%RHp,%RHp,%RHp} pInterPaePDPT64=%RHp\n",
1681 MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
1682 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
1683 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
1684 MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
1685#endif
1686
1687 /*
1688 * Log the host paging mode. It may come in handy.
1689 */
1690 const char *pszHostMode;
1691 switch (pVM->pgm.s.enmHostMode)
1692 {
1693 case SUPPAGINGMODE_32_BIT: pszHostMode = "32-bit"; break;
1694 case SUPPAGINGMODE_32_BIT_GLOBAL: pszHostMode = "32-bit+PGE"; break;
1695 case SUPPAGINGMODE_PAE: pszHostMode = "PAE"; break;
1696 case SUPPAGINGMODE_PAE_GLOBAL: pszHostMode = "PAE+PGE"; break;
1697 case SUPPAGINGMODE_PAE_NX: pszHostMode = "PAE+NXE"; break;
1698 case SUPPAGINGMODE_PAE_GLOBAL_NX: pszHostMode = "PAE+PGE+NXE"; break;
1699 case SUPPAGINGMODE_AMD64: pszHostMode = "AMD64"; break;
1700 case SUPPAGINGMODE_AMD64_GLOBAL: pszHostMode = "AMD64+PGE"; break;
1701 case SUPPAGINGMODE_AMD64_NX: pszHostMode = "AMD64+NX"; break;
1702 case SUPPAGINGMODE_AMD64_GLOBAL_NX: pszHostMode = "AMD64+PGE+NX"; break;
1703 default: pszHostMode = "???"; break;
1704 }
1705 LogRel(("PGM: Host paging mode: %s\n", pszHostMode));
1706
1707 return VINF_SUCCESS;
1708 }
1709
1710 LogFlow(("pgmR3InitPaging: returns %Rrc\n", rc));
1711 return rc;
1712}
1713
1714
1715/**
1716 * Init statistics.
1717 * @returns VBox status code.
1718 */
1719static int pgmR3InitStats(PVM pVM)
1720{
1721 PPGM pPGM = &pVM->pgm.s;
1722 int rc;
1723
1724 /*
1725 * Release statistics.
1726 */
1727 /* Common - misc variables */
1728 STAM_REL_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_COUNT, "The total number of pages.");
1729 STAM_REL_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_COUNT, "The number of private pages.");
1730 STAM_REL_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_COUNT, "The number of shared pages.");
1731 STAM_REL_REG(pVM, &pPGM->cReusedSharedPages, STAMTYPE_U32, "/PGM/Page/cReusedSharedPages", STAMUNIT_COUNT, "The number of reused shared pages.");
1732 STAM_REL_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_COUNT, "The number of zero backed pages.");
1733 STAM_REL_REG(pVM, &pPGM->cPureMmioPages, STAMTYPE_U32, "/PGM/Page/cPureMmioPages", STAMUNIT_COUNT, "The number of pure MMIO pages.");
1734 STAM_REL_REG(pVM, &pPGM->cMonitoredPages, STAMTYPE_U32, "/PGM/Page/cMonitoredPages", STAMUNIT_COUNT, "The number of write monitored pages.");
1735 STAM_REL_REG(pVM, &pPGM->cWrittenToPages, STAMTYPE_U32, "/PGM/Page/cWrittenToPages", STAMUNIT_COUNT, "The number of previously write monitored pages that have been written to.");
1736 STAM_REL_REG(pVM, &pPGM->cWriteLockedPages, STAMTYPE_U32, "/PGM/Page/cWriteLockedPages", STAMUNIT_COUNT, "The number of write(/read) locked pages.");
1737 STAM_REL_REG(pVM, &pPGM->cReadLockedPages, STAMTYPE_U32, "/PGM/Page/cReadLockedPages", STAMUNIT_COUNT, "The number of read (only) locked pages.");
1738 STAM_REL_REG(pVM, &pPGM->cBalloonedPages, STAMTYPE_U32, "/PGM/Page/cBalloonedPages", STAMUNIT_COUNT, "The number of ballooned pages.");
1739 STAM_REL_REG(pVM, &pPGM->cHandyPages, STAMTYPE_U32, "/PGM/Page/cHandyPages", STAMUNIT_COUNT, "The number of handy pages (not included in cAllPages).");
1740 STAM_REL_REG(pVM, &pPGM->cLargePages, STAMTYPE_U32, "/PGM/Page/cLargePages", STAMUNIT_COUNT, "The number of large pages allocated (includes disabled).");
1741 STAM_REL_REG(pVM, &pPGM->cLargePagesDisabled, STAMTYPE_U32, "/PGM/Page/cLargePagesDisabled", STAMUNIT_COUNT, "The number of disabled large pages.");
1742 STAM_REL_REG(pVM, &pPGM->cRelocations, STAMTYPE_COUNTER, "/PGM/cRelocations", STAMUNIT_OCCURENCES,"Number of hypervisor relocations.");
1743 STAM_REL_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_COUNT, "Number of mapped chunks.");
1744 STAM_REL_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_COUNT, "Maximum number of mapped chunks.");
1745 STAM_REL_REG(pVM, &pPGM->cMappedChunks, STAMTYPE_U32, "/PGM/ChunkR3Map/Mapped", STAMUNIT_COUNT, "Number of times we mapped a chunk.");
1746 STAM_REL_REG(pVM, &pPGM->cUnmappedChunks, STAMTYPE_U32, "/PGM/ChunkR3Map/Unmapped", STAMUNIT_COUNT, "Number of times we unmapped a chunk.");
1747
1748 STAM_REL_REG(pVM, &pPGM->StatLargePageReused, STAMTYPE_COUNTER, "/PGM/LargePage/Reused", STAMUNIT_OCCURENCES, "The number of times we've reused a large page.");
1749 STAM_REL_REG(pVM, &pPGM->StatLargePageRefused, STAMTYPE_COUNTER, "/PGM/LargePage/Refused", STAMUNIT_OCCURENCES, "The number of times we couldn't use a large page.");
1750 STAM_REL_REG(pVM, &pPGM->StatLargePageRecheck, STAMTYPE_COUNTER, "/PGM/LargePage/Recheck", STAMUNIT_OCCURENCES, "The number of times we've rechecked a disabled large page.");
1751
1752 STAM_REL_REG(pVM, &pPGM->StatShModCheck, STAMTYPE_PROFILE, "/PGM/ShMod/Check", STAMUNIT_TICKS_PER_CALL, "Profiles the shared module checking.");
1753
1754 /* Live save */
1755 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.fActive, STAMTYPE_U8, "/PGM/LiveSave/fActive", STAMUNIT_COUNT, "Active or not.");
1756 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cIgnoredPages, STAMTYPE_U32, "/PGM/LiveSave/cIgnoredPages", STAMUNIT_COUNT, "The number of ignored pages in the RAM ranges (i.e. MMIO, MMIO2 and ROM).");
1757 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cDirtyPagesLong, STAMTYPE_U32, "/PGM/LiveSave/cDirtyPagesLong", STAMUNIT_COUNT, "Longer term dirty page average.");
1758 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cDirtyPagesShort, STAMTYPE_U32, "/PGM/LiveSave/cDirtyPagesShort", STAMUNIT_COUNT, "Short term dirty page average.");
1759 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cPagesPerSecond, STAMTYPE_U32, "/PGM/LiveSave/cPagesPerSecond", STAMUNIT_COUNT, "Pages per second.");
1760 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cSavedPages, STAMTYPE_U64, "/PGM/LiveSave/cSavedPages", STAMUNIT_COUNT, "The total number of saved pages.");
1761 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cReadyPages, STAMTYPE_U32, "/PGM/LiveSave/Ram/cReadyPages", STAMUNIT_COUNT, "RAM: Ready pages.");
1762 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cDirtyPages, STAMTYPE_U32, "/PGM/LiveSave/Ram/cDirtyPages", STAMUNIT_COUNT, "RAM: Dirty pages.");
1763 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cZeroPages, STAMTYPE_U32, "/PGM/LiveSave/Ram/cZeroPages", STAMUNIT_COUNT, "RAM: Ready zero pages.");
1764 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cMonitoredPages, STAMTYPE_U32, "/PGM/LiveSave/Ram/cMonitoredPages", STAMUNIT_COUNT, "RAM: Write monitored pages.");
1765 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cReadyPages, STAMTYPE_U32, "/PGM/LiveSave/Rom/cReadyPages", STAMUNIT_COUNT, "ROM: Ready pages.");
1766 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cDirtyPages, STAMTYPE_U32, "/PGM/LiveSave/Rom/cDirtyPages", STAMUNIT_COUNT, "ROM: Dirty pages.");
1767 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cZeroPages, STAMTYPE_U32, "/PGM/LiveSave/Rom/cZeroPages", STAMUNIT_COUNT, "ROM: Ready zero pages.");
1768 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cMonitoredPages, STAMTYPE_U32, "/PGM/LiveSave/Rom/cMonitoredPages", STAMUNIT_COUNT, "ROM: Write monitored pages.");
1769 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cReadyPages, STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cReadyPages", STAMUNIT_COUNT, "MMIO2: Ready pages.");
1770 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cDirtyPages, STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cDirtyPages", STAMUNIT_COUNT, "MMIO2: Dirty pages.");
1771 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cZeroPages, STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cZeroPages", STAMUNIT_COUNT, "MMIO2: Ready zero pages.");
1772 STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cMonitoredPages,STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cMonitoredPages",STAMUNIT_COUNT, "MMIO2: Write monitored pages.");
1773
1774#ifdef VBOX_WITH_STATISTICS
1775
1776# define PGM_REG_COUNTER(a, b, c) \
1777 rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b); \
1778 AssertRC(rc);
1779
1780# define PGM_REG_COUNTER_BYTES(a, b, c) \
1781 rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_BYTES, c, b); \
1782 AssertRC(rc);
1783
1784# define PGM_REG_PROFILE(a, b, c) \
1785 rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b); \
1786 AssertRC(rc);
1787
1788 PGMSTATS *pStats = pVM->pgm.s.pStatsR3;
1789
1790 PGM_REG_PROFILE(&pStats->StatAllocLargePage, "/PGM/LargePage/Prof/Alloc", "Time spent by the host OS for large page allocation.");
1791 PGM_REG_PROFILE(&pStats->StatClearLargePage, "/PGM/LargePage/Prof/Clear", "Time spent clearing the newly allocated large pages.");
1792 PGM_REG_COUNTER(&pStats->StatLargePageOverflow, "/PGM/LargePage/Overflow", "The number of times allocating a large page took too long.");
1793 PGM_REG_PROFILE(&pStats->StatR3IsValidLargePage, "/PGM/LargePage/Prof/R3/IsValid", "pgmPhysIsValidLargePage profiling - R3.");
1794 PGM_REG_PROFILE(&pStats->StatRZIsValidLargePage, "/PGM/LargePage/Prof/RZ/IsValid", "pgmPhysIsValidLargePage profiling - RZ.");
1795
1796 PGM_REG_COUNTER(&pStats->StatR3DetectedConflicts, "/PGM/R3/DetectedConflicts", "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1797 PGM_REG_PROFILE(&pStats->StatR3ResolveConflict, "/PGM/R3/ResolveConflict", "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1798 PGM_REG_COUNTER(&pStats->StatR3PhysRead, "/PGM/R3/Phys/Read", "The number of times PGMPhysRead was called.");
1799 PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysReadBytes, "/PGM/R3/Phys/Read/Bytes", "The number of bytes read by PGMPhysRead.");
1800 PGM_REG_COUNTER(&pStats->StatR3PhysWrite, "/PGM/R3/Phys/Write", "The number of times PGMPhysWrite was called.");
1801 PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysWriteBytes, "/PGM/R3/Phys/Write/Bytes", "The number of bytes written by PGMPhysWrite.");
1802 PGM_REG_COUNTER(&pStats->StatR3PhysSimpleRead, "/PGM/R3/Phys/Simple/Read", "The number of times PGMPhysSimpleReadGCPtr was called.");
1803 PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysSimpleReadBytes, "/PGM/R3/Phys/Simple/Read/Bytes", "The number of bytes read by PGMPhysSimpleReadGCPtr.");
1804 PGM_REG_COUNTER(&pStats->StatR3PhysSimpleWrite, "/PGM/R3/Phys/Simple/Write", "The number of times PGMPhysSimpleWriteGCPtr was called.");
1805 PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysSimpleWriteBytes, "/PGM/R3/Phys/Simple/Write/Bytes", "The number of bytes written by PGMPhysSimpleWriteGCPtr.");
1806
1807 PGM_REG_COUNTER(&pStats->StatRZChunkR3MapTlbHits, "/PGM/ChunkR3Map/TlbHitsRZ", "TLB hits.");
1808 PGM_REG_COUNTER(&pStats->StatRZChunkR3MapTlbMisses, "/PGM/ChunkR3Map/TlbMissesRZ", "TLB misses.");
1809 PGM_REG_PROFILE(&pStats->StatChunkAging, "/PGM/ChunkR3Map/Map/Aging", "Chunk aging profiling.");
1810 PGM_REG_PROFILE(&pStats->StatChunkFindCandidate, "/PGM/ChunkR3Map/Map/Find", "Chunk unmap find profiling.");
1811 PGM_REG_PROFILE(&pStats->StatChunkUnmap, "/PGM/ChunkR3Map/Map/Unmap", "Chunk unmap of address space profiling.");
1812 PGM_REG_PROFILE(&pStats->StatChunkMap, "/PGM/ChunkR3Map/Map/Map", "Chunk map of address space profiling.");
1813
1814 PGM_REG_COUNTER(&pStats->StatRZPageMapTlbHits, "/PGM/RZ/Page/MapTlbHits", "TLB hits.");
1815 PGM_REG_COUNTER(&pStats->StatRZPageMapTlbMisses, "/PGM/RZ/Page/MapTlbMisses", "TLB misses.");
1816 PGM_REG_COUNTER(&pStats->StatR3ChunkR3MapTlbHits, "/PGM/ChunkR3Map/TlbHitsR3", "TLB hits.");
1817 PGM_REG_COUNTER(&pStats->StatR3ChunkR3MapTlbMisses, "/PGM/ChunkR3Map/TlbMissesR3", "TLB misses.");
1818 PGM_REG_COUNTER(&pStats->StatR3PageMapTlbHits, "/PGM/R3/Page/MapTlbHits", "TLB hits.");
1819 PGM_REG_COUNTER(&pStats->StatR3PageMapTlbMisses, "/PGM/R3/Page/MapTlbMisses", "TLB misses.");
1820 PGM_REG_COUNTER(&pStats->StatPageMapTlbFlushes, "/PGM/R3/Page/MapTlbFlushes", "TLB flushes (all contexts).");
1821 PGM_REG_COUNTER(&pStats->StatPageMapTlbFlushEntry, "/PGM/R3/Page/MapTlbFlushEntry", "TLB entry flushes (all contexts).");
1822
1823 PGM_REG_COUNTER(&pStats->StatRZRamRangeTlbHits, "/PGM/RZ/RamRange/TlbHits", "TLB hits.");
1824 PGM_REG_COUNTER(&pStats->StatRZRamRangeTlbMisses, "/PGM/RZ/RamRange/TlbMisses", "TLB misses.");
1825 PGM_REG_COUNTER(&pStats->StatR3RamRangeTlbHits, "/PGM/R3/RamRange/TlbHits", "TLB hits.");
1826 PGM_REG_COUNTER(&pStats->StatR3RamRangeTlbMisses, "/PGM/R3/RamRange/TlbMisses", "TLB misses.");
1827
1828 PGM_REG_PROFILE(&pStats->StatRZSyncCR3HandlerVirtualUpdate, "/PGM/RZ/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
1829 PGM_REG_PROFILE(&pStats->StatRZSyncCR3HandlerVirtualReset, "/PGM/RZ/SyncCR3/Handlers/VirtualReset", "Profiling of the virtual handler resets.");
1830 PGM_REG_PROFILE(&pStats->StatR3SyncCR3HandlerVirtualUpdate, "/PGM/R3/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
1831 PGM_REG_PROFILE(&pStats->StatR3SyncCR3HandlerVirtualReset, "/PGM/R3/SyncCR3/Handlers/VirtualReset", "Profiling of the virtual handler resets.");
1832
1833 PGM_REG_COUNTER(&pStats->StatRZPhysHandlerReset, "/PGM/RZ/PhysHandlerReset", "The number of times PGMHandlerPhysicalReset is called.");
1834 PGM_REG_COUNTER(&pStats->StatR3PhysHandlerReset, "/PGM/R3/PhysHandlerReset", "The number of times PGMHandlerPhysicalReset is called.");
1835 PGM_REG_COUNTER(&pStats->StatRZPhysHandlerLookupHits, "/PGM/RZ/PhysHandlerLookupHits", "The number of cache hits when looking up physical handlers.");
1836 PGM_REG_COUNTER(&pStats->StatR3PhysHandlerLookupHits, "/PGM/R3/PhysHandlerLookupHits", "The number of cache hits when looking up physical handlers.");
1837 PGM_REG_COUNTER(&pStats->StatRZPhysHandlerLookupMisses, "/PGM/RZ/PhysHandlerLookupMisses", "The number of cache misses when looking up physical handlers.");
1838 PGM_REG_COUNTER(&pStats->StatR3PhysHandlerLookupMisses, "/PGM/R3/PhysHandlerLookupMisses", "The number of cache misses when looking up physical handlers.");
1839 PGM_REG_PROFILE(&pStats->StatRZVirtHandlerSearchByPhys, "/PGM/RZ/VirtHandlerSearchByPhys", "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1840 PGM_REG_PROFILE(&pStats->StatR3VirtHandlerSearchByPhys, "/PGM/R3/VirtHandlerSearchByPhys", "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1841
1842 PGM_REG_COUNTER(&pStats->StatRZPageReplaceShared, "/PGM/RZ/Page/ReplacedShared", "Times a shared page was replaced.");
1843 PGM_REG_COUNTER(&pStats->StatRZPageReplaceZero, "/PGM/RZ/Page/ReplacedZero", "Times the zero page was replaced.");
1844/// @todo PGM_REG_COUNTER(&pStats->StatRZPageHandyAllocs, "/PGM/RZ/Page/HandyAllocs", "Number of times we've allocated more handy pages.");
1845 PGM_REG_COUNTER(&pStats->StatR3PageReplaceShared, "/PGM/R3/Page/ReplacedShared", "Times a shared page was replaced.");
1846 PGM_REG_COUNTER(&pStats->StatR3PageReplaceZero, "/PGM/R3/Page/ReplacedZero", "Times the zero page was replaced.");
1847/// @todo PGM_REG_COUNTER(&pStats->StatR3PageHandyAllocs, "/PGM/R3/Page/HandyAllocs", "Number of times we've allocated more handy pages.");
1848
1849 PGM_REG_COUNTER(&pStats->StatRZPhysRead, "/PGM/RZ/Phys/Read", "The number of times PGMPhysRead was called.");
1850 PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysReadBytes, "/PGM/RZ/Phys/Read/Bytes", "The number of bytes read by PGMPhysRead.");
1851 PGM_REG_COUNTER(&pStats->StatRZPhysWrite, "/PGM/RZ/Phys/Write", "The number of times PGMPhysWrite was called.");
1852 PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysWriteBytes, "/PGM/RZ/Phys/Write/Bytes", "The number of bytes written by PGMPhysWrite.");
1853 PGM_REG_COUNTER(&pStats->StatRZPhysSimpleRead, "/PGM/RZ/Phys/Simple/Read", "The number of times PGMPhysSimpleReadGCPtr was called.");
1854 PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysSimpleReadBytes, "/PGM/RZ/Phys/Simple/Read/Bytes", "The number of bytes read by PGMPhysSimpleReadGCPtr.");
1855 PGM_REG_COUNTER(&pStats->StatRZPhysSimpleWrite, "/PGM/RZ/Phys/Simple/Write", "The number of times PGMPhysSimpleWriteGCPtr was called.");
1856 PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysSimpleWriteBytes, "/PGM/RZ/Phys/Simple/Write/Bytes", "The number of bytes written by PGMPhysSimpleWriteGCPtr.");
1857
1858 /* GC only: */
1859 PGM_REG_COUNTER(&pStats->StatRCInvlPgConflict, "/PGM/RC/InvlPgConflict", "Number of times PGMInvalidatePage() detected a mapping conflict.");
1860 PGM_REG_COUNTER(&pStats->StatRCInvlPgSyncMonCR3, "/PGM/RC/InvlPgSyncMonitorCR3", "Number of times PGMInvalidatePage() ran into PGM_SYNC_MONITOR_CR3.");
1861
1862 PGM_REG_COUNTER(&pStats->StatRCPhysRead, "/PGM/RC/Phys/Read", "The number of times PGMPhysRead was called.");
1863 PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysReadBytes, "/PGM/RC/Phys/Read/Bytes", "The number of bytes read by PGMPhysRead.");
1864 PGM_REG_COUNTER(&pStats->StatRCPhysWrite, "/PGM/RC/Phys/Write", "The number of times PGMPhysWrite was called.");
1865 PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysWriteBytes, "/PGM/RC/Phys/Write/Bytes", "The number of bytes written by PGMPhysWrite.");
1866 PGM_REG_COUNTER(&pStats->StatRCPhysSimpleRead, "/PGM/RC/Phys/Simple/Read", "The number of times PGMPhysSimpleReadGCPtr was called.");
1867 PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysSimpleReadBytes, "/PGM/RC/Phys/Simple/Read/Bytes", "The number of bytes read by PGMPhysSimpleReadGCPtr.");
1868 PGM_REG_COUNTER(&pStats->StatRCPhysSimpleWrite, "/PGM/RC/Phys/Simple/Write", "The number of times PGMPhysSimpleWriteGCPtr was called.");
1869 PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysSimpleWriteBytes, "/PGM/RC/Phys/Simple/Write/Bytes", "The number of bytes written by PGMPhysSimpleWriteGCPtr.");
1870
1871 PGM_REG_COUNTER(&pStats->StatTrackVirgin,                   "/PGM/Track/Virgin",                  "The number of first-time shadowings.");
1872 PGM_REG_COUNTER(&pStats->StatTrackAliased,                  "/PGM/Track/Aliased",                 "The number of times we switched to cRef2, i.e. the page is being shadowed by two PTs.");
1873 PGM_REG_COUNTER(&pStats->StatTrackAliasedMany,              "/PGM/Track/AliasedMany",             "The number of times we're tracking using cRef2.");
1874 PGM_REG_COUNTER(&pStats->StatTrackAliasedLots,              "/PGM/Track/AliasedLots",             "The number of times we're hitting pages which have overflowed cRef2.");
1875 PGM_REG_COUNTER(&pStats->StatTrackOverflows, "/PGM/Track/Overflows", "The number of times the extent list grows too long.");
1876 PGM_REG_COUNTER(&pStats->StatTrackNoExtentsLeft, "/PGM/Track/NoExtentLeft", "The number of times the extent list was exhausted.");
1877 PGM_REG_PROFILE(&pStats->StatTrackDeref, "/PGM/Track/Deref", "Profiling of SyncPageWorkerTrackDeref (expensive).");
1878
1879# undef PGM_REG_COUNTER
1880# undef PGM_REG_PROFILE
1881#endif
1882
1883 /*
1884 * Note! The layout below matches the member layout exactly!
1885 */
1886
1887 /*
1888 * Common - stats
1889 */
1890 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
1891 {
1892 PPGMCPU pPgmCpu = &pVM->aCpus[idCpu].pgm.s;
1893
1894#define PGM_REG_COUNTER(a, b, c) \
1895 rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b, idCpu); \
1896 AssertRC(rc);
1897#define PGM_REG_PROFILE(a, b, c) \
1898 rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b, idCpu); \
1899 AssertRC(rc);
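        /* Unlike the global variants earlier, these per-CPU macro versions
         * pass idCpu as an extra STAMR3RegisterF format argument, so each name
         * template expands per CPU, e.g. "/PGM/CPU%u/cGuestModeChanges"
         * becomes "/PGM/CPU0/..." and "/PGM/CPU1/..." on a two-CPU VM. */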
1900
1901 PGM_REG_COUNTER(&pPgmCpu->cGuestModeChanges, "/PGM/CPU%u/cGuestModeChanges", "Number of guest mode changes.");
1902 PGM_REG_COUNTER(&pPgmCpu->cA20Changes, "/PGM/CPU%u/cA20Changes", "Number of A20 gate changes.");
1903
1904#ifdef VBOX_WITH_STATISTICS
1905 PGMCPUSTATS *pCpuStats = pVM->aCpus[idCpu].pgm.s.pStatsR3;
1906
1907# if 0 /* rarely useful; leave for debugging. */
1908 for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatSyncPtPD); j++)
1909 STAMR3RegisterF(pVM, &pCpuStats->StatSyncPtPD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1910 "The number of SyncPT per PD n.", "/PGM/CPU%u/PDSyncPT/%04X", idCpu, j);
1911 for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatSyncPagePD); j++)
1912 STAMR3RegisterF(pVM, &pCpuStats->StatSyncPagePD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1913 "The number of SyncPage per PD n.", "/PGM/CPU%u/PDSyncPage/%04X", idCpu, j);
1914# endif
1915 /* R0 only: */
1916 PGM_REG_PROFILE(&pCpuStats->StatR0NpMiscfg, "/PGM/CPU%u/R0/NpMiscfg", "PGMR0Trap0eHandlerNPMisconfig() profiling.");
1917 PGM_REG_COUNTER(&pCpuStats->StatR0NpMiscfgSyncPage, "/PGM/CPU%u/R0/NpMiscfgSyncPage", "SyncPage calls from PGMR0Trap0eHandlerNPMisconfig().");
1918
1919 /* RZ only: */
1920 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0e, "/PGM/CPU%u/RZ/Trap0e", "Profiling of the PGMTrap0eHandler() body.");
1921 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Ballooned, "/PGM/CPU%u/RZ/Trap0e/Time2/Ballooned", "Profiling of the Trap0eHandler body when the cause is read access to a ballooned page.");
1922 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2CSAM, "/PGM/CPU%u/RZ/Trap0e/Time2/CSAM", "Profiling of the Trap0eHandler body when the cause is CSAM.");
1923 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2DirtyAndAccessed, "/PGM/CPU%u/RZ/Trap0e/Time2/DirtyAndAccessedBits", "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1924 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2GuestTrap, "/PGM/CPU%u/RZ/Trap0e/Time2/GuestTrap", "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1925 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndPhys, "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerPhysical", "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1926 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndVirt, "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerVirtual", "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1927 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndUnhandled, "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerUnhandled", "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1928 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2InvalidPhys, "/PGM/CPU%u/RZ/Trap0e/Time2/InvalidPhys", "Profiling of the Trap0eHandler body when the cause is access to an invalid physical guest address.");
1929 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2MakeWritable, "/PGM/CPU%u/RZ/Trap0e/Time2/MakeWritable", "Profiling of the Trap0eHandler body when the cause is that a page needed to be made writeable.");
1930 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Mapping, "/PGM/CPU%u/RZ/Trap0e/Time2/Mapping", "Profiling of the Trap0eHandler body when the cause is related to the guest mappings.");
1931 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Misc, "/PGM/CPU%u/RZ/Trap0e/Time2/Misc", "Profiling of the Trap0eHandler body when the cause is not known.");
1932 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSync, "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSync", "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1933 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndPhys, "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncHndPhys", "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1934 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndVirt, "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncHndVirt", "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1935 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndObs, "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncObsHnd", "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1936 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2SyncPT, "/PGM/CPU%u/RZ/Trap0e/Time2/SyncPT", "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1937 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2WPEmulation, "/PGM/CPU%u/RZ/Trap0e/Time2/WPEmulation", "Profiling of the Trap0eHandler body when the cause is CR0.WP emulation.");
1938 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Wp0RoUsHack,      "/PGM/CPU%u/RZ/Trap0e/Time2/WP0R0USHack",     "Profiling of the Trap0eHandler body when the cause is CR0.WP and the netware hack is being enabled.");
1939 PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Wp0RoUsUnhack,    "/PGM/CPU%u/RZ/Trap0e/Time2/WP0R0USUnhack",   "Profiling of the Trap0eHandler body when the cause is CR0.WP and the netware hack is being disabled.");
1940 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eConflicts, "/PGM/CPU%u/RZ/Trap0e/Conflicts", "The number of times #PF was caused by an undetected conflict.");
1941 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersMapping, "/PGM/CPU%u/RZ/Trap0e/Handlers/Mapping", "Number of traps due to access handlers in mappings.");
1942 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersOutOfSync, "/PGM/CPU%u/RZ/Trap0e/Handlers/OutOfSync", "Number of traps due to out-of-sync handled pages.");
1943 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysAll, "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysAll", "Number of traps due to physical all-access handlers.");
1944 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysAllOpt, "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysAllOpt", "Number of the physical all-access handler traps using the optimization.");
1945 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysWrite, "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysWrite", "Number of traps due to physical write-access handlers.");
1946 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtual, "/PGM/CPU%u/RZ/Trap0e/Handlers/Virtual", "Number of traps due to virtual access handlers.");
1947 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtualByPhys, "/PGM/CPU%u/RZ/Trap0e/Handlers/VirtualByPhys", "Number of traps due to virtual access handlers by physical address.");
1948 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtualUnmarked,"/PGM/CPU%u/RZ/Trap0e/Handlers/VirtualUnmarked","Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1949 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersUnhandled, "/PGM/CPU%u/RZ/Trap0e/Handlers/Unhandled", "Number of traps due to access outside range of monitored page(s).");
1950 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersInvalid, "/PGM/CPU%u/RZ/Trap0e/Handlers/Invalid", "Number of traps due to access to invalid physical memory.");
1951 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNotPresentRead, "/PGM/CPU%u/RZ/Trap0e/Err/User/NPRead", "Number of user mode not present read page faults.");
1952 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNotPresentWrite, "/PGM/CPU%u/RZ/Trap0e/Err/User/NPWrite", "Number of user mode not present write page faults.");
1953 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSWrite, "/PGM/CPU%u/RZ/Trap0e/Err/User/Write", "Number of user mode write page faults.");
1954 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSReserved, "/PGM/CPU%u/RZ/Trap0e/Err/User/Reserved", "Number of user mode reserved bit page faults.");
1955 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNXE, "/PGM/CPU%u/RZ/Trap0e/Err/User/NXE", "Number of user mode NXE page faults.");
1956 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSRead, "/PGM/CPU%u/RZ/Trap0e/Err/User/Read", "Number of user mode read page faults.");
1957 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVNotPresentRead, "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NPRead", "Number of supervisor mode not present read page faults.");
1958 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVNotPresentWrite, "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NPWrite", "Number of supervisor mode not present write page faults.");
1959 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVWrite, "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/Write", "Number of supervisor mode write page faults.");
1960 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVReserved, "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/Reserved", "Number of supervisor mode reserved bit page faults.");
1961 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSNXE, "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NXE", "Number of supervisor mode NXE page faults.");
1962 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eGuestPF, "/PGM/CPU%u/RZ/Trap0e/GuestPF", "Number of real guest page faults.");
1963 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eGuestPFMapping, "/PGM/CPU%u/RZ/Trap0e/GuestPF/InMapping", "Number of real guest page faults in a mapping.");
1964 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eWPEmulInRZ, "/PGM/CPU%u/RZ/Trap0e/WP/InRZ", "Number of guest page faults due to X86_CR0_WP emulation.");
1965 PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eWPEmulToR3, "/PGM/CPU%u/RZ/Trap0e/WP/ToR3", "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1966#if 0 /* rarely useful; leave for debugging. */
1967 for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatRZTrap0ePD); j++)
1968 STAMR3RegisterF(pVM, &pCpuStats->StatRZTrap0ePD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1969 "The number of traps in page directory n.", "/PGM/CPU%u/RZ/Trap0e/PD/%04X", idCpu, j);
1970#endif
1971 PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteHandled, "/PGM/CPU%u/RZ/CR3WriteHandled", "The number of times the Guest CR3 change was successfully handled.");
1972 PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteUnhandled, "/PGM/CPU%u/RZ/CR3WriteUnhandled", "The number of times the Guest CR3 change was passed back to the recompiler.");
1973 PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteConflict, "/PGM/CPU%u/RZ/CR3WriteConflict", "The number of times the Guest CR3 monitoring detected a conflict.");
1974 PGM_REG_COUNTER(&pCpuStats->StatRZGuestROMWriteHandled, "/PGM/CPU%u/RZ/ROMWriteHandled", "The number of times the Guest ROM change was successfully handled.");
1975 PGM_REG_COUNTER(&pCpuStats->StatRZGuestROMWriteUnhandled, "/PGM/CPU%u/RZ/ROMWriteUnhandled", "The number of times the Guest ROM change was passed back to the recompiler.");
1976
1977 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapMigrateInvlPg, "/PGM/CPU%u/RZ/DynMap/MigrateInvlPg", "invlpg count in PGMR0DynMapMigrateAutoSet.");
1978 PGM_REG_PROFILE(&pCpuStats->StatRZDynMapGCPageInl, "/PGM/CPU%u/RZ/DynMap/PageGCPageInl", "Calls to pgmR0DynMapGCPageInlined.");
1979 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlHits, "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/Hits", "Hash table lookup hits.");
1980 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlMisses,   "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/Misses",  "Misses that fall back to the common code.");
1981 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlRamHits, "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/RamHits", "1st ram range hits.");
1982 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlRamMisses, "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/RamMisses", "1st ram range misses, takes slow path.");
1983 PGM_REG_PROFILE(&pCpuStats->StatRZDynMapHCPageInl, "/PGM/CPU%u/RZ/DynMap/PageHCPageInl", "Calls to pgmRZDynMapHCPageInlined.");
1984 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapHCPageInlHits, "/PGM/CPU%u/RZ/DynMap/PageHCPageInl/Hits", "Hash table lookup hits.");
1985 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapHCPageInlMisses,   "/PGM/CPU%u/RZ/DynMap/PageHCPageInl/Misses",  "Misses that fall back to the common code.");
1986 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPage, "/PGM/CPU%u/RZ/DynMap/Page", "Calls to pgmR0DynMapPage");
1987 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetOptimize, "/PGM/CPU%u/RZ/DynMap/Page/SetOptimize", "Calls to pgmRZDynMapOptimizeAutoSet.");
1988 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchFlushes, "/PGM/CPU%u/RZ/DynMap/Page/SetSearchFlushes", "Set search restoring to subset flushes.");
1989 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchHits, "/PGM/CPU%u/RZ/DynMap/Page/SetSearchHits", "Set search hits.");
1990 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchMisses, "/PGM/CPU%u/RZ/DynMap/Page/SetSearchMisses", "Set search misses.");
1991 PGM_REG_PROFILE(&pCpuStats->StatRZDynMapHCPage, "/PGM/CPU%u/RZ/DynMap/Page/HCPage", "Calls to pgmRZDynMapHCPageCommon (ring-0).");
1992 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits0, "/PGM/CPU%u/RZ/DynMap/Page/Hits0", "Hits at iPage+0");
1993 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits1, "/PGM/CPU%u/RZ/DynMap/Page/Hits1", "Hits at iPage+1");
1994 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits2, "/PGM/CPU%u/RZ/DynMap/Page/Hits2", "Hits at iPage+2");
1995 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageInvlPg, "/PGM/CPU%u/RZ/DynMap/Page/InvlPg", "invlpg count in pgmR0DynMapPageSlow.");
1996 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlow, "/PGM/CPU%u/RZ/DynMap/Page/Slow", "Calls to pgmR0DynMapPageSlow - subtract this from pgmR0DynMapPage to get 1st level hits.");
1997 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLoopHits, "/PGM/CPU%u/RZ/DynMap/Page/SlowLoopHits" , "Hits in the loop path.");
1998 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLoopMisses, "/PGM/CPU%u/RZ/DynMap/Page/SlowLoopMisses", "Misses in the loop path. NonLoopMisses = Slow - SlowLoopHit - SlowLoopMisses");
1999 //PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLostHits, "/PGM/CPU%u/R0/DynMap/Page/SlowLostHits", "Lost hits.");
2000 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSubsets, "/PGM/CPU%u/RZ/DynMap/Subsets", "Times PGMRZDynMapPushAutoSubset was called.");
2001 PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPopFlushes, "/PGM/CPU%u/RZ/DynMap/SubsetPopFlushes", "Times PGMRZDynMapPopAutoSubset flushes the subset.");
2002 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[0], "/PGM/CPU%u/RZ/DynMap/SetFilledPct000..09", "00-09% filled (RC: min(set-size, dynmap-size))");
2003 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[1], "/PGM/CPU%u/RZ/DynMap/SetFilledPct010..19", "10-19% filled (RC: min(set-size, dynmap-size))");
2004 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[2], "/PGM/CPU%u/RZ/DynMap/SetFilledPct020..29", "20-29% filled (RC: min(set-size, dynmap-size))");
2005 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[3], "/PGM/CPU%u/RZ/DynMap/SetFilledPct030..39", "30-39% filled (RC: min(set-size, dynmap-size))");
2006 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[4], "/PGM/CPU%u/RZ/DynMap/SetFilledPct040..49", "40-49% filled (RC: min(set-size, dynmap-size))");
2007 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[5], "/PGM/CPU%u/RZ/DynMap/SetFilledPct050..59", "50-59% filled (RC: min(set-size, dynmap-size))");
2008 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[6], "/PGM/CPU%u/RZ/DynMap/SetFilledPct060..69", "60-69% filled (RC: min(set-size, dynmap-size))");
2009 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[7], "/PGM/CPU%u/RZ/DynMap/SetFilledPct070..79", "70-79% filled (RC: min(set-size, dynmap-size))");
2010 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[8], "/PGM/CPU%u/RZ/DynMap/SetFilledPct080..89", "80-89% filled (RC: min(set-size, dynmap-size))");
2011 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[9], "/PGM/CPU%u/RZ/DynMap/SetFilledPct090..99", "90-99% filled (RC: min(set-size, dynmap-size))");
2012 PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[10], "/PGM/CPU%u/RZ/DynMap/SetFilledPct100", "100% filled (RC: min(set-size, dynmap-size))");
2013
2014 /* HC only: */
2015
2016 /* RZ & R3: */
2017 PGM_REG_PROFILE(&pCpuStats->StatRZSyncCR3, "/PGM/CPU%u/RZ/SyncCR3", "Profiling of the PGMSyncCR3() body.");
2018 PGM_REG_PROFILE(&pCpuStats->StatRZSyncCR3Handlers, "/PGM/CPU%u/RZ/SyncCR3/Handlers", "Profiling of the PGMSyncCR3() update handler section.");
2019 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3Global, "/PGM/CPU%u/RZ/SyncCR3/Global", "The number of global CR3 syncs.");
2020 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3NotGlobal, "/PGM/CPU%u/RZ/SyncCR3/NotGlobal", "The number of non-global CR3 syncs.");
2021 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstCacheHit,      "/PGM/CPU%u/RZ/SyncCR3/DstCacheHit",          "The number of times we got some kind of a cache hit.");
2022 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstFreed, "/PGM/CPU%u/RZ/SyncCR3/DstFreed", "The number of times we've had to free a shadow entry.");
2023 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstFreedSrcNP, "/PGM/CPU%u/RZ/SyncCR3/DstFreedSrcNP", "The number of times we've had to free a shadow entry for which the source entry was not present.");
2024 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstNotPresent, "/PGM/CPU%u/RZ/SyncCR3/DstNotPresent", "The number of times we've encountered a not present shadow entry for a present guest entry.");
2025 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstSkippedGlobalPD, "/PGM/CPU%u/RZ/SyncCR3/DstSkippedGlobalPD", "The number of times a global page directory wasn't flushed.");
2026 PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstSkippedGlobalPT, "/PGM/CPU%u/RZ/SyncCR3/DstSkippedGlobalPT", "The number of times a page table with only global entries wasn't flushed.");
2027 PGM_REG_PROFILE(&pCpuStats->StatRZSyncPT, "/PGM/CPU%u/RZ/SyncPT", "Profiling of the pfnSyncPT() body.");
2028 PGM_REG_COUNTER(&pCpuStats->StatRZSyncPTFailed, "/PGM/CPU%u/RZ/SyncPT/Failed", "The number of times pfnSyncPT() failed.");
2029 PGM_REG_COUNTER(&pCpuStats->StatRZSyncPT4K, "/PGM/CPU%u/RZ/SyncPT/4K", "Nr of 4K PT syncs");
2030 PGM_REG_COUNTER(&pCpuStats->StatRZSyncPT4M, "/PGM/CPU%u/RZ/SyncPT/4M", "Nr of 4M PT syncs");
2031 PGM_REG_COUNTER(&pCpuStats->StatRZSyncPagePDNAs, "/PGM/CPU%u/RZ/SyncPagePDNAs", "The number of time we've marked a PD not present from SyncPage to virtualize the accessed bit.");
2032 PGM_REG_COUNTER(&pCpuStats->StatRZSyncPagePDOutOfSync, "/PGM/CPU%u/RZ/SyncPagePDOutOfSync", "The number of time we've encountered an out-of-sync PD in SyncPage.");
2033 PGM_REG_COUNTER(&pCpuStats->StatRZAccessedPage, "/PGM/CPU%u/RZ/AccessedPage", "The number of pages marked not present for accessed bit emulation.");
2034 PGM_REG_PROFILE(&pCpuStats->StatRZDirtyBitTracking, "/PGM/CPU%u/RZ/DirtyPage", "Profiling the dirty bit tracking in CheckPageFault().");
2035 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPage, "/PGM/CPU%u/RZ/DirtyPage/Mark", "The number of pages marked read-only for dirty bit tracking.");
2036 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageBig, "/PGM/CPU%u/RZ/DirtyPage/MarkBig", "The number of 4MB pages marked read-only for dirty bit tracking.");
2037 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageSkipped, "/PGM/CPU%u/RZ/DirtyPage/Skipped", "The number of pages already dirty or readonly.");
2038 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageTrap, "/PGM/CPU%u/RZ/DirtyPage/Trap", "The number of traps generated for dirty bit tracking.");
2039 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageStale, "/PGM/CPU%u/RZ/DirtyPage/Stale", "The number of traps generated for dirty bit tracking (stale tlb entries).");
2040 PGM_REG_COUNTER(&pCpuStats->StatRZDirtiedPage, "/PGM/CPU%u/RZ/DirtyPage/SetDirty", "The number of pages marked dirty because of write accesses.");
2041 PGM_REG_COUNTER(&pCpuStats->StatRZDirtyTrackRealPF, "/PGM/CPU%u/RZ/DirtyPage/RealPF", "The number of real pages faults during dirty bit tracking.");
2042 PGM_REG_COUNTER(&pCpuStats->StatRZPageAlreadyDirty, "/PGM/CPU%u/RZ/DirtyPage/AlreadySet", "The number of pages already marked dirty because of write accesses.");
2043 PGM_REG_PROFILE(&pCpuStats->StatRZInvalidatePage, "/PGM/CPU%u/RZ/InvalidatePage", "PGMInvalidatePage() profiling.");
2044 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4KBPages, "/PGM/CPU%u/RZ/InvalidatePage/4KBPages", "The number of times PGMInvalidatePage() was called for a 4KB page.");
2045 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4MBPages, "/PGM/CPU%u/RZ/InvalidatePage/4MBPages", "The number of times PGMInvalidatePage() was called for a 4MB page.");
2046 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4MBPagesSkip, "/PGM/CPU%u/RZ/InvalidatePage/4MBPagesSkip","The number of times PGMInvalidatePage() skipped a 4MB page.");
2047 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDMappings, "/PGM/CPU%u/RZ/InvalidatePage/PDMappings", "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
2048 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDNAs, "/PGM/CPU%u/RZ/InvalidatePage/PDNAs", "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
2049 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDNPs, "/PGM/CPU%u/RZ/InvalidatePage/PDNPs", "The number of times PGMInvalidatePage() was called for a not present page directory.");
2050 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDOutOfSync, "/PGM/CPU%u/RZ/InvalidatePage/PDOutOfSync", "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
2051 PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePageSkipped,   "/PGM/CPU%u/RZ/InvalidatePage/Skipped",       "The number of times PGMInvalidatePage() was skipped due to a not-present shadow entry or a pending SyncCR3.");
2052 PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncSupervisor, "/PGM/CPU%u/RZ/OutOfSync/SuperVisor", "Number of traps due to pages out of sync (P) and times VerifyAccessSyncPage calls SyncPage.");
2053 PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncUser, "/PGM/CPU%u/RZ/OutOfSync/User", "Number of traps due to pages out of sync (P) and times VerifyAccessSyncPage calls SyncPage.");
2054 PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncSupervisorWrite,"/PGM/CPU%u/RZ/OutOfSync/SuperVisorWrite", "Number of traps due to pages out of sync (RW) and times VerifyAccessSyncPage calls SyncPage.");
2055 PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncUserWrite, "/PGM/CPU%u/RZ/OutOfSync/UserWrite", "Number of traps due to pages out of sync (RW) and times VerifyAccessSyncPage calls SyncPage.");
2056 PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncBallloon, "/PGM/CPU%u/RZ/OutOfSync/Balloon", "The number of times a ballooned page was accessed (read).");
2057 PGM_REG_PROFILE(&pCpuStats->StatRZPrefetch, "/PGM/CPU%u/RZ/Prefetch", "PGMPrefetchPage profiling.");
2058 PGM_REG_PROFILE(&pCpuStats->StatRZFlushTLB, "/PGM/CPU%u/RZ/FlushTLB", "Profiling of the PGMFlushTLB() body.");
2059 PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBNewCR3, "/PGM/CPU%u/RZ/FlushTLB/NewCR3", "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
2060 PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBNewCR3Global, "/PGM/CPU%u/RZ/FlushTLB/NewCR3Global", "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
2061 PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBSameCR3, "/PGM/CPU%u/RZ/FlushTLB/SameCR3", "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
2062 PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBSameCR3Global, "/PGM/CPU%u/RZ/FlushTLB/SameCR3Global", "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
2063 PGM_REG_PROFILE(&pCpuStats->StatRZGstModifyPage, "/PGM/CPU%u/RZ/GstModifyPage", "Profiling of the PGMGstModifyPage() body.");
2064
2065 PGM_REG_PROFILE(&pCpuStats->StatR3SyncCR3, "/PGM/CPU%u/R3/SyncCR3", "Profiling of the PGMSyncCR3() body.");
2066 PGM_REG_PROFILE(&pCpuStats->StatR3SyncCR3Handlers, "/PGM/CPU%u/R3/SyncCR3/Handlers", "Profiling of the PGMSyncCR3() update handler section.");
2067 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3Global, "/PGM/CPU%u/R3/SyncCR3/Global", "The number of global CR3 syncs.");
2068 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3NotGlobal, "/PGM/CPU%u/R3/SyncCR3/NotGlobal", "The number of non-global CR3 syncs.");
2069 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstCacheHit, "/PGM/CPU%u/R3/SyncCR3/DstCacheHit", "The number of times we got some kind of a cache hit.");
2070 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstFreed, "/PGM/CPU%u/R3/SyncCR3/DstFreed", "The number of times we've had to free a shadow entry.");
2071 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstFreedSrcNP, "/PGM/CPU%u/R3/SyncCR3/DstFreedSrcNP", "The number of times we've had to free a shadow entry for which the source entry was not present.");
2072 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstNotPresent, "/PGM/CPU%u/R3/SyncCR3/DstNotPresent", "The number of times we've encountered a not present shadow entry for a present guest entry.");
2073 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstSkippedGlobalPD, "/PGM/CPU%u/R3/SyncCR3/DstSkippedGlobalPD", "The number of times a global page directory wasn't flushed.");
2074 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstSkippedGlobalPT, "/PGM/CPU%u/R3/SyncCR3/DstSkippedGlobalPT", "The number of times a page table with only global entries wasn't flushed.");
2075 PGM_REG_PROFILE(&pCpuStats->StatR3SyncPT, "/PGM/CPU%u/R3/SyncPT", "Profiling of the pfnSyncPT() body.");
2076 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPTFailed, "/PGM/CPU%u/R3/SyncPT/Failed", "The number of times pfnSyncPT() failed.");
2077 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPT4K, "/PGM/CPU%u/R3/SyncPT/4K", "The number of 4KB page table syncs.");
2078 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPT4M, "/PGM/CPU%u/R3/SyncPT/4M", "The number of 4MB page table syncs.");
2079 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPagePDNAs, "/PGM/CPU%u/R3/SyncPagePDNAs", "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
2080 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPagePDOutOfSync, "/PGM/CPU%u/R3/SyncPagePDOutOfSync", "The number of times we've encountered an out-of-sync PD in SyncPage.");
2081 PGM_REG_COUNTER(&pCpuStats->StatR3AccessedPage, "/PGM/CPU%u/R3/AccessedPage", "The number of pages marked not present for accessed bit emulation.");
2082 PGM_REG_PROFILE(&pCpuStats->StatR3DirtyBitTracking, "/PGM/CPU%u/R3/DirtyPage", "Profiling the dirty bit tracking in CheckPageFault().");
2083 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPage, "/PGM/CPU%u/R3/DirtyPage/Mark", "The number of pages marked read-only for dirty bit tracking.");
2084 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageBig, "/PGM/CPU%u/R3/DirtyPage/MarkBig", "The number of 4MB pages marked read-only for dirty bit tracking.");
2085 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageSkipped, "/PGM/CPU%u/R3/DirtyPage/Skipped", "The number of pages already dirty or readonly.");
2086 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageTrap, "/PGM/CPU%u/R3/DirtyPage/Trap", "The number of traps generated for dirty bit tracking.");
2087 PGM_REG_COUNTER(&pCpuStats->StatR3DirtiedPage, "/PGM/CPU%u/R3/DirtyPage/SetDirty", "The number of pages marked dirty because of write accesses.");
2088 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyTrackRealPF, "/PGM/CPU%u/R3/DirtyPage/RealPF", "The number of real page faults during dirty bit tracking.");
2089 PGM_REG_COUNTER(&pCpuStats->StatR3PageAlreadyDirty, "/PGM/CPU%u/R3/DirtyPage/AlreadySet", "The number of pages already marked dirty because of write accesses.");
2090 PGM_REG_PROFILE(&pCpuStats->StatR3InvalidatePage, "/PGM/CPU%u/R3/InvalidatePage", "PGMInvalidatePage() profiling.");
2091 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4KBPages, "/PGM/CPU%u/R3/InvalidatePage/4KBPages", "The number of times PGMInvalidatePage() was called for a 4KB page.");
2092 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4MBPages, "/PGM/CPU%u/R3/InvalidatePage/4MBPages", "The number of times PGMInvalidatePage() was called for a 4MB page.");
2093 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4MBPagesSkip, "/PGM/CPU%u/R3/InvalidatePage/4MBPagesSkip","The number of times PGMInvalidatePage() skipped a 4MB page.");
2094 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDMappings, "/PGM/CPU%u/R3/InvalidatePage/PDMappings", "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
2095 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDNAs, "/PGM/CPU%u/R3/InvalidatePage/PDNAs", "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
2096 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDNPs, "/PGM/CPU%u/R3/InvalidatePage/PDNPs", "The number of times PGMInvalidatePage() was called for a not present page directory.");
2097 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDOutOfSync, "/PGM/CPU%u/R3/InvalidatePage/PDOutOfSync", "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
2098 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePageSkipped, "/PGM/CPU%u/R3/InvalidatePage/Skipped", "The number of times PGMInvalidatePage() was skipped due to a not-present shadow paging structure or a pending SyncCR3.");
2099 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncSupervisor, "/PGM/CPU%u/R3/OutOfSync/SuperVisor", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
2100 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncUser, "/PGM/CPU%u/R3/OutOfSync/User", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
2101 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncBallloon, "/PGM/CPU%u/R3/OutOfSync/Balloon", "The number of times a ballooned page was accessed (read).");
2102 PGM_REG_PROFILE(&pCpuStats->StatR3Prefetch, "/PGM/CPU%u/R3/Prefetch", "PGMPrefetchPage profiling.");
2103 PGM_REG_PROFILE(&pCpuStats->StatR3FlushTLB, "/PGM/CPU%u/R3/FlushTLB", "Profiling of the PGMFlushTLB() body.");
2104 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBNewCR3, "/PGM/CPU%u/R3/FlushTLB/NewCR3", "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
2105 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBNewCR3Global, "/PGM/CPU%u/R3/FlushTLB/NewCR3Global", "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
2106 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBSameCR3, "/PGM/CPU%u/R3/FlushTLB/SameCR3", "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
2107 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBSameCR3Global, "/PGM/CPU%u/R3/FlushTLB/SameCR3Global", "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
2108 PGM_REG_PROFILE(&pCpuStats->StatR3GstModifyPage, "/PGM/CPU%u/R3/GstModifyPage", "Profiling of the PGMGstModifyPage() body.");
2109#endif /* VBOX_WITH_STATISTICS */
2110
2111#undef PGM_REG_PROFILE
2112#undef PGM_REG_COUNTER
2113
2114 }
2115
2116 return VINF_SUCCESS;
2117}
2118
2119
2120/**
2121 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
2122 *
2123 * The dynamic mapping area will also be allocated and initialized at this
2124 * time. We could allocate it during PGMR3Init of course, but the mapping
2125 * area wouldn't be reserved at that point, which would prevent us from
2126 * setting up the page table entries with the dummy page.
2127 *
2128 * @returns VBox status code.
2129 * @param pVM The cross context VM structure.
2130 */
2131VMMR3DECL(int) PGMR3InitDynMap(PVM pVM)
2132{
2133 RTGCPTR GCPtr;
2134 int rc;
2135
2136 /*
2137 * Reserve space for the dynamic mappings.
2138 */
2139 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
2140 if (RT_SUCCESS(rc))
2141 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
2142
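/*
 * Illustration of the check below (assuming X86_PD_PAE_SHIFT is 21, i.e.
 * one PAE page directory entry maps 2MB): the two shifts compare the PAE
 * PD slot of the first and the last byte of the reservation.  If they
 * differ, the area straddles a PD boundary, so a second, non-crossing
 * area is reserved instead and the first one is simply left unused.
 */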
2143 if ( RT_SUCCESS(rc)
2144 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT))
2145 {
2146 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
2147 if (RT_SUCCESS(rc))
2148 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
2149 }
2150 if (RT_SUCCESS(rc))
2151 {
2152 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT));
2153 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
2154 }
2155 return rc;
2156}
2157
2158
2159/**
2160 * Ring-3 init finalizing.
2161 *
2162 * @returns VBox status code.
2163 * @param pVM The cross context VM structure.
2164 */
2165VMMR3DECL(int) PGMR3InitFinalize(PVM pVM)
2166{
2167 int rc;
2168
2169 /*
2170 * Initialize the dynamic mapping pages (reserved in PGMR3InitDynMap)
2171 * with dummy pages to simplify the cache.
2172 */
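/*
 * Sketch of the index arithmetic below: 'off' is the byte offset of the
 * dynamic area within its hypervisor mapping.  Shifting by X86_PD_SHIFT (22)
 * picks the 4MB page table of the mapping that covers it, while shifting by
 * X86_PT_SHIFT (12) and masking with X86_PT_MASK yields the page table entry
 * index inside that page table.  The RC addresses of the 32-bit and PAE PTE
 * arrays are then derived from those same indexes.
 */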
2173 /* get the pointer to the page table entries. */
2174 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
2175 AssertRelease(pMapping);
2176 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
2177 const unsigned iPT = off >> X86_PD_SHIFT;
2178 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
2179 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTRC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
2180 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsRC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
2181
2182 /* Initialize the cache area: map every dynamic mapping page to the dummy page. */
2183 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
2184 for (uint32_t offDynMap = 0; offDynMap < MM_HYPER_DYNAMIC_SIZE; offDynMap += PAGE_SIZE)
2185 {
2186 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + offDynMap, HCPhysDummy, PAGE_SIZE, 0);
2187 AssertRCReturn(rc, rc);
2188 }
2189
2190 /*
2191 * Determine the max physical address width (MAXPHYADDR) and use it to
2192 * initialize the invalid physical address masks.
2193 */
2194 uint32_t cMaxPhysAddrWidth;
2195 uint32_t uMaxExtLeaf = ASMCpuId_EAX(0x80000000);
2196 if ( uMaxExtLeaf >= 0x80000008
2197 && uMaxExtLeaf <= 0x80000fff)
2198 {
2199 cMaxPhysAddrWidth = ASMCpuId_EAX(0x80000008) & 0xff;
2200 LogRel(("PGM: The CPU physical address width is %u bits\n", cMaxPhysAddrWidth));
2201 cMaxPhysAddrWidth = RT_MIN(52, cMaxPhysAddrWidth);
2202 pVM->pgm.s.fLessThan52PhysicalAddressBits = cMaxPhysAddrWidth < 52;
2203 for (uint32_t iBit = cMaxPhysAddrWidth; iBit < 52; iBit++)
2204 pVM->pgm.s.HCPhysInvMmioPg |= RT_BIT_64(iBit);
2205 }
2206 else
2207 {
2208 LogRel(("PGM: ASSUMING CPU physical address width of 48 bits (uMaxExtLeaf=%#x)\n", uMaxExtLeaf));
2209 cMaxPhysAddrWidth = 48;
2210 pVM->pgm.s.fLessThan52PhysicalAddressBits = true;
2211 pVM->pgm.s.HCPhysInvMmioPg |= UINT64_C(0x000f000000000000); /* bits 48..51, matching the loop above for a 48-bit width */
2212 }
2213
2214 /** @todo query from CPUM. */
2215 pVM->pgm.s.GCPhysInvAddrMask = 0;
2216 for (uint32_t iBit = cMaxPhysAddrWidth; iBit < 64; iBit++)
2217 pVM->pgm.s.GCPhysInvAddrMask |= RT_BIT_64(iBit);
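/*
 * Worked example: on a CPU reporting MAXPHYADDR=36 the loops above set
 * bits 36..51 in HCPhysInvMmioPg and bits 36..63 in GCPhysInvAddrMask,
 * i.e. exactly the bits that can never occur in a valid physical address
 * on that CPU.
 */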
2218
2219 /*
2220 * Initialize the invalid paging entry masks, assuming NX is disabled.
2221 */
2222 uint64_t fMbzPageFrameMask = pVM->pgm.s.GCPhysInvAddrMask & UINT64_C(0x000ffffffffff000);
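/*
 * Note: the AND above restricts the invalid-address mask to the page frame
 * field of a PAE/long mode paging entry (bits 12..51); bits outside that
 * field have flag/MBZ meanings of their own and are covered by the
 * X86_*_MBZ_MASK_* constants OR'ed in below.
 */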
2223 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2224 {
2225 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2226
2227 /** @todo The manuals are not entirely clear whether the physical
2228 * address width is relevant. See table 5-9 in the intel
2229 * manual vs the PDE4M descriptions. Write testcase (NP). */
2230 pVCpu->pgm.s.fGst32BitMbzBigPdeMask = ((uint32_t)(fMbzPageFrameMask >> (32 - 13)) & X86_PDE4M_PG_HIGH_MASK)
2231 | X86_PDE4M_MBZ_MASK;
2232
2233 pVCpu->pgm.s.fGstPaeMbzPteMask = fMbzPageFrameMask | X86_PTE_PAE_MBZ_MASK_NO_NX;
2234 pVCpu->pgm.s.fGstPaeMbzPdeMask = fMbzPageFrameMask | X86_PDE_PAE_MBZ_MASK_NO_NX;
2235 pVCpu->pgm.s.fGstPaeMbzBigPdeMask = fMbzPageFrameMask | X86_PDE2M_PAE_MBZ_MASK_NO_NX;
2236 pVCpu->pgm.s.fGstPaeMbzPdpeMask = fMbzPageFrameMask | X86_PDPE_PAE_MBZ_MASK;
2237
2238 pVCpu->pgm.s.fGstAmd64MbzPteMask = fMbzPageFrameMask | X86_PTE_LM_MBZ_MASK_NO_NX;
2239 pVCpu->pgm.s.fGstAmd64MbzPdeMask = fMbzPageFrameMask | X86_PDE_LM_MBZ_MASK_NX;
2240 pVCpu->pgm.s.fGstAmd64MbzBigPdeMask = fMbzPageFrameMask | X86_PDE2M_LM_MBZ_MASK_NX;
2241 pVCpu->pgm.s.fGstAmd64MbzPdpeMask = fMbzPageFrameMask | X86_PDPE_LM_MBZ_MASK_NO_NX;
2242 pVCpu->pgm.s.fGstAmd64MbzBigPdpeMask = fMbzPageFrameMask | X86_PDPE1G_LM_MBZ_MASK_NO_NX;
2243 pVCpu->pgm.s.fGstAmd64MbzPml4eMask = fMbzPageFrameMask | X86_PML4E_MBZ_MASK_NO_NX;
2244
2245 pVCpu->pgm.s.fGst64ShadowedPteMask = X86_PTE_P | X86_PTE_RW | X86_PTE_US | X86_PTE_G | X86_PTE_A | X86_PTE_D;
2246 pVCpu->pgm.s.fGst64ShadowedPdeMask = X86_PDE_P | X86_PDE_RW | X86_PDE_US | X86_PDE_A;
2247 pVCpu->pgm.s.fGst64ShadowedBigPdeMask = X86_PDE4M_P | X86_PDE4M_RW | X86_PDE4M_US | X86_PDE4M_A;
2248 pVCpu->pgm.s.fGst64ShadowedBigPde4PteMask =
2249 X86_PDE4M_P | X86_PDE4M_RW | X86_PDE4M_US | X86_PDE4M_G | X86_PDE4M_A | X86_PDE4M_D;
2250 pVCpu->pgm.s.fGstAmd64ShadowedPdpeMask = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A;
2251 pVCpu->pgm.s.fGstAmd64ShadowedPml4eMask = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A;
2252 }
2253
2254 /*
2255 * Note that AMD uses all 8 reserved bits for the address (so 40 bits in total);
2256 * Intel only goes up to 36 bits, so we stick to 36 as well.
2257 * Update: More recent Intel manuals specify 40 bits, just like AMD.
2258 */
2259 uint32_t u32Dummy, u32Features;
2260 CPUMGetGuestCpuId(VMMGetCpu(pVM), 1, 0, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
2261 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
2262 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(RT_MAX(36, cMaxPhysAddrWidth)) - 1;
2263 else
2264 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
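/*
 * Illustration: with PSE-36, bits 13 and up of a 4MB PDE supply physical
 * address bits 32 and up, so a 4MB page can address memory beyond 4GB;
 * hence the RT_MAX(36, cMaxPhysAddrWidth)-bit mask above.  Without PSE-36
 * a 4MB page can only address the first 4GB, hence the 2^32 - 1 mask.
 */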
2265
2266 /*
2267 * Pre-allocate all guest RAM if that was requested.
2268 */
2269 if (pVM->pgm.s.fRamPreAlloc)
2270 rc = pgmR3PhysRamPreAllocate(pVM);
2271
2272 LogRel(("PGM: PGMR3InitFinalize: 4 MB PSE mask %RGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
2273 return rc;
2274}
2275
2276
2277/**
2278 * Init phase completed callback.
2279 *
2280 * @returns VBox status code.
2281 * @param pVM The cross context VM structure.
2282 * @param enmWhat What has been completed.
2283 * @thread EMT(0)
2284 */
2285VMMR3_INT_DECL(int) PGMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
2286{
2287 switch (enmWhat)
2288 {
2289 case VMINITCOMPLETED_HM:
2290#ifdef VBOX_WITH_PCI_PASSTHROUGH
2291 if (pVM->pgm.s.fPciPassthrough)
2292 {
2293 AssertLogRelReturn(pVM->pgm.s.fRamPreAlloc, VERR_PCI_PASSTHROUGH_NO_RAM_PREALLOC);
2294 AssertLogRelReturn(HMIsEnabled(pVM), VERR_PCI_PASSTHROUGH_NO_HM);
2295 AssertLogRelReturn(HMIsNestedPagingActive(pVM), VERR_PCI_PASSTHROUGH_NO_NESTED_PAGING);
2296
2297 /*
2298 * Report assignments to the IOMMU (hope that's good enough for now).
2299 */
2300 if (pVM->pgm.s.fPciPassthrough)
2301 {
2302 int rc = VMMR3CallR0(pVM, VMMR0_DO_PGM_PHYS_SETUP_IOMMU, 0, NULL);
2303 AssertRCReturn(rc, rc);
2304 }
2305 }
2306#else
2307 AssertLogRelReturn(!pVM->pgm.s.fPciPassthrough, VERR_PGM_PCI_PASSTHRU_MISCONFIG);
2308#endif
2309 break;
2310
2311 default:
2312 /* shut up gcc */
2313 break;
2314 }
2315
2316 return VINF_SUCCESS;
2317}
2318
2319
2320/**
2321 * Applies relocations to data and code managed by this component.
2322 *
2323 * This function will be called at init and whenever the VMM needs to
2324 * relocate itself inside the GC.
2325 *
2326 * @param pVM The cross context VM structure.
2327 * @param offDelta Relocation delta relative to old location.
2328 */
2329VMMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
2330{
2331 LogFlow(("PGMR3Relocate %RGv to %RGv\n", pVM->pgm.s.GCPtrCR3Mapping, pVM->pgm.s.GCPtrCR3Mapping + offDelta));
2332
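/*
 * The general pattern in this function: every raw-mode context (RC) address
 * PGM keeps is either recomputed from its ring-3 counterpart via
 * MMHyperR3ToRC/MMHyperCCToRC, or, where no R3 counterpart exists, simply
 * rebased by adding offDelta.
 */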
2333 /*
2334 * Paging stuff.
2335 */
2336 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
2337
2338 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
2339
2340 /* Shadow, guest and both mode switch & relocation for each VCPU. */
2341 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2342 {
2343 PVMCPU pVCpu = &pVM->aCpus[i];
2344
2345 pgmR3ModeDataSwitch(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
2346
2347 PGM_SHW_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2348 PGM_GST_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2349 PGM_BTH_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2350 }
2351
2352 /*
2353 * Trees.
2354 */
2355 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
2356
2357 /*
2358 * Ram ranges.
2359 */
2360 if (pVM->pgm.s.pRamRangesXR3)
2361 {
2362 /* Update the pSelfRC pointers and relink them. */
2363 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesXR3; pCur; pCur = pCur->pNextR3)
2364 if (!(pCur->fFlags & PGM_RAM_RANGE_FLAGS_FLOATING))
2365 pCur->pSelfRC = MMHyperCCToRC(pVM, pCur);
2366 pgmR3PhysRelinkRamRanges(pVM);
2367
2368 /* Flush the RC TLB. */
2369 for (unsigned i = 0; i < PGM_RAMRANGE_TLB_ENTRIES; i++)
2370 pVM->pgm.s.apRamRangesTlbRC[i] = NIL_RTRCPTR;
2371 }
2372
2373 /*
2374 * Update the pSelfRC pointer of the MMIO2 ram ranges since they might not
2375 * be mapped and thus not included in the above exercise.
2376 */
2377 for (PPGMMMIO2RANGE pCur = pVM->pgm.s.pMmio2RangesR3; pCur; pCur = pCur->pNextR3)
2378 if (!(pCur->RamRange.fFlags & PGM_RAM_RANGE_FLAGS_FLOATING))
2379 pCur->RamRange.pSelfRC = MMHyperCCToRC(pVM, &pCur->RamRange);
2380
2381 /*
2382 * Update the two page directories with all page table mappings.
2383 * (One or more of them have changed, that's why we're here.)
2384 */
2385 pVM->pgm.s.pMappingsRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pMappingsR3);
2386 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
2387 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2388
2389 /* Relocate GC addresses of Page Tables. */
2390 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
2391 {
2392 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
2393 {
2394 pCur->aPTs[i].pPTRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
2395 pCur->aPTs[i].paPaePTsRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
2396 }
2397 }
2398
2399 /*
2400 * Dynamic page mapping area.
2401 */
2402 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
2403 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
2404 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
2405
2406 if (pVM->pgm.s.pRCDynMap)
2407 {
2408 pVM->pgm.s.pRCDynMap += offDelta;
2409 PPGMRCDYNMAP pDynMap = (PPGMRCDYNMAP)MMHyperRCToCC(pVM, pVM->pgm.s.pRCDynMap);
2410
2411 pDynMap->paPages += offDelta;
2412 PPGMRCDYNMAPENTRY paPages = (PPGMRCDYNMAPENTRY)MMHyperRCToCC(pVM, pDynMap->paPages);
2413
2414 for (uint32_t iPage = 0; iPage < pDynMap->cPages; iPage++)
2415 {
2416 paPages[iPage].pvPage += offDelta;
2417 paPages[iPage].uPte.pLegacy += offDelta;
2418 paPages[iPage].uPte.pPae += offDelta;
2419 }
2420 }
2421
2422 /*
2423 * The Zero page.
2424 */
2425 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
2426#ifdef VBOX_WITH_2X_4GB_ADDR_SPACE
2427 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR || !HMIsEnabled(pVM));
2428#else
2429 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR);
2430#endif
2431
2432 /*
2433 * Physical and virtual handlers.
2434 */
2435 PGMRELOCHANDLERARGS Args = { offDelta, pVM };
2436 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3RelocatePhysHandler, &Args);
2437 pVM->pgm.s.pLastPhysHandlerRC = NIL_RTRCPTR;
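/* Note: pLastPhysHandlerRC caches the last physical handler lookup; it is
 reset above because any cached RC address is stale after the move. */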
2438
2439 PPGMPHYSHANDLERTYPEINT pCurPhysType;
2440 RTListOff32ForEach(&pVM->pgm.s.pTreesR3->HeadPhysHandlerTypes, pCurPhysType, PGMPHYSHANDLERTYPEINT, ListNode)
2441 {
2442 if (pCurPhysType->pfnHandlerRC != NIL_RTRCPTR)
2443 pCurPhysType->pfnHandlerRC += offDelta;
2444 if (pCurPhysType->pfnPfHandlerRC != NIL_RTRCPTR)
2445 pCurPhysType->pfnPfHandlerRC += offDelta;
2446 }
2447
2448#ifdef VBOX_WITH_RAW_MODE
2449 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3RelocateVirtHandler, &Args);
2450 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &Args);
2451
2452 PPGMVIRTHANDLERTYPEINT pCurVirtType;
2453 RTListOff32ForEach(&pVM->pgm.s.pTreesR3->HeadVirtHandlerTypes, pCurVirtType, PGMVIRTHANDLERTYPEINT, ListNode)
2454 {
2455 if (pCurVirtType->pfnHandlerRC != NIL_RTRCPTR)
2456 pCurVirtType->pfnHandlerRC += offDelta;
2457 if (pCurVirtType->pfnPfHandlerRC != NIL_RTRCPTR)
2458 pCurVirtType->pfnPfHandlerRC += offDelta;
2459 }
2460#endif
2461
2462 /*
2463 * The page pool.
2464 */
2465 pgmR3PoolRelocate(pVM);
2466
2467#ifdef VBOX_WITH_STATISTICS
2468 /*
2469 * Statistics.
2470 */
2471 pVM->pgm.s.pStatsRC = MMHyperCCToRC(pVM, pVM->pgm.s.pStatsR3);
2472 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2473 pVM->aCpus[iCpu].pgm.s.pStatsRC = MMHyperCCToRC(pVM, pVM->aCpus[iCpu].pgm.s.pStatsR3);
2474#endif
2475}
2476
2477
2478/**
2479 * Callback function for relocating a physical access handler.
2480 *
2481 * @returns 0 (continue enum)
2482 * @param pNode Pointer to a PGMPHYSHANDLER node.
2483 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2484 */
2485static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
2486{
2487 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
2488 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
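/* pvUserRC values below 64KB are treated by the check below as tokens or
 table indexes rather than RC pointers, so only real pointers get rebased. */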
2489 if (pHandler->pvUserRC >= 0x10000)
2490 pHandler->pvUserRC += pArgs->offDelta;
2491 return 0;
2492}
2493
2494#ifdef VBOX_WITH_RAW_MODE
2495
2496/**
2497 * Callback function for relocating a virtual access handler.
2498 *
2499 * @returns 0 (continue enum)
2500 * @param pNode Pointer to a PGMVIRTHANDLER node.
2501 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2502 */
2503static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2504{
2505 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2506 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
2507 Assert(PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->enmKind != PGMVIRTHANDLERKIND_HYPERVISOR);
2508
2509 if ( pHandler->pvUserRC != NIL_RTRCPTR
2510 && PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->fRelocUserRC)
2511 pHandler->pvUserRC += pArgs->offDelta;
2512 return 0;
2513}
2514
2515
2516/**
2517 * Callback function for relocating a virtual access handler for the hypervisor mapping.
2518 *
2519 * @returns 0 (continue enum)
2520 * @param pNode Pointer to a PGMVIRTHANDLER node.
2521 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2522 */
2523static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2524{
2525 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2526 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
2527 Assert(PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->enmKind == PGMVIRTHANDLERKIND_HYPERVISOR);
2528
2529 if ( pHandler->pvUserRC != NIL_RTRCPTR
2530 && PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->fRelocUserRC)
2531 pHandler->pvUserRC += pArgs->offDelta;
2532 return 0;
2533}
2534
2535#endif /* VBOX_WITH_RAW_MODE */
2536
2537/**
2538 * Resets a virtual CPU when unplugged.
2539 *
2540 * @param pVM The cross context VM structure.
2541 * @param pVCpu The cross context virtual CPU structure.
2542 */
2543VMMR3DECL(void) PGMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
2544{
2545 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
2546 AssertRC(rc);
2547
2548 rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
2549 AssertRC(rc);
2550
2551 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cGuestModeChanges);
2552
2553 pgmR3PoolResetUnpluggedCpu(pVM, pVCpu);
2554
2555 /*
2556 * Re-init other members.
2557 */
2558 pVCpu->pgm.s.fA20Enabled = true;
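/* Illustration of the A20 mask expression below: with A20 enabled it
 evaluates to ~((RTGCPHYS)0 << 20) = ~0, letting all address bits pass;
 with A20 disabled it clears bit 20, making physical addresses wrap at
 the 1MB boundary like on the original 8086 bus. */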
2559 pVCpu->pgm.s.GCPhysA20Mask = ~((RTGCPHYS)!pVCpu->pgm.s.fA20Enabled << 20);
2560
2561 /*
2562 * Clear the FFs PGM owns.
2563 */
2564 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2565 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
2566}
2567
2568
2569/**
2570 * The VM is being reset.
2571 *
2572 * For the PGM component this means that any PD write monitors
2573 * need to be removed.
2574 *
2575 * @param pVM The cross context VM structure.
2576 */
2577VMMR3_INT_DECL(void) PGMR3Reset(PVM pVM)
2578{
2579 LogFlow(("PGMR3Reset:\n"));
2580 VM_ASSERT_EMT(pVM);
2581
2582 pgmLock(pVM);
2583
2584 /*
2585 * Unfix any fixed mappings and disable CR3 monitoring.
2586 */
2587 pVM->pgm.s.fMappingsFixed = false;
2588 pVM->pgm.s.fMappingsFixedRestored = false;
2589 pVM->pgm.s.GCPtrMappingFixed = NIL_RTGCPTR;
2590 pVM->pgm.s.cbMappingFixed = 0;
2591
2592 /*
2593 * Exit the guest paging mode before the pgm pool gets reset.
2594 * Important to clean up the amd64 case.
2595 */
2596 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2597 {
2598 PVMCPU pVCpu = &pVM->aCpus[i];
2599 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
2600 AssertReleaseRC(rc);
2601 }
2602
2603#ifdef DEBUG
2604 DBGFR3_INFO_LOG(pVM, "mappings", NULL);
2605 DBGFR3_INFO_LOG(pVM, "handlers", "all nostat");
2606#endif
2607
2608 /*
2609 * Switch mode back to real mode. (before resetting the pgm pool!)
2610 */
2611 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2612 {
2613 PVMCPU pVCpu = &pVM->aCpus[i];
2614
2615 int rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
2616 AssertReleaseRC(rc);
2617
2618 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cGuestModeChanges);
2619 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cA20Changes);
2620 }
2621
2622 /*
2623 * Reset the shadow page pool.
2624 */
2625 pgmR3PoolReset(pVM);
2626
2627 /*
2628 * Re-init various other members and clear the FFs that PGM owns.
2629 */
2630 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2631 {
2632 PVMCPU pVCpu = &pVM->aCpus[i];
2633
2634 pVCpu->pgm.s.fGst32BitPageSizeExtension = false;
2635 PGMNotifyNxeChanged(pVCpu, false);
2636
2637 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2638 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
2639
2640 if (!pVCpu->pgm.s.fA20Enabled)
2641 {
2642 pVCpu->pgm.s.fA20Enabled = true;
2643 pVCpu->pgm.s.GCPhysA20Mask = ~((RTGCPHYS)!pVCpu->pgm.s.fA20Enabled << 20);
2644#ifdef PGM_WITH_A20
2645 pVCpu->pgm.s.fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2646 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2647 pgmR3RefreshShadowModeAfterA20Change(pVCpu);
2648 HMFlushTLB(pVCpu);
2649#endif
2650 }
2651 }
2652
2653 pgmUnlock(pVM);
2654}
2655
2656
2657/**
2658 * Memory setup after VM construction or reset.
2659 *
2660 * @param pVM The cross context VM structure.
2661 * @param fAtReset Indicates the context, after reset if @c true or after
2662 * construction if @c false.
2663 */
2664VMMR3_INT_DECL(void) PGMR3MemSetup(PVM pVM, bool fAtReset)
2665{
2666 if (fAtReset)
2667 {
2668 pgmLock(pVM);
2669
2670 int rc = pgmR3PhysRamZeroAll(pVM);
2671 AssertReleaseRC(rc);
2672
2673 rc = pgmR3PhysRomReset(pVM);
2674 AssertReleaseRC(rc);
2675
2676 pgmUnlock(pVM);
2677 }
2678}
2679
2680
2681#ifdef VBOX_STRICT
2682/**
2683 * VM state change callback for clearing fNoMorePhysWrites after
2684 * a snapshot has been created.
2685 */
2686static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PUVM pUVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2687{
2688 if ( enmState == VMSTATE_RUNNING
2689 || enmState == VMSTATE_RESUMING)
2690 pUVM->pVM->pgm.s.fNoMorePhysWrites = false;
2691 NOREF(enmOldState); NOREF(pvUser);
2692}
2693#endif
2694
2695/**
2696 * Private API to reset fNoMorePhysWrites.
2697 */
2698VMMR3_INT_DECL(void) PGMR3ResetNoMorePhysWritesFlag(PVM pVM)
2699{
2700 pVM->pgm.s.fNoMorePhysWrites = false;
2701}
2702
2703/**
2704 * Terminates the PGM.
2705 *
2706 * @returns VBox status code.
2707 * @param pVM The cross context VM structure.
2708 */
2709VMMR3DECL(int) PGMR3Term(PVM pVM)
2710{
2711 /* Must free shared pages here. */
2712 pgmLock(pVM);
2713 pgmR3PhysRamTerm(pVM);
2714 pgmR3PhysRomTerm(pVM);
2715 pgmUnlock(pVM);
2716
2717 PGMDeregisterStringFormatTypes();
2718 return PDMR3CritSectDelete(&pVM->pgm.s.CritSectX);
2719}
2720
2721
2722/**
2723 * Show paging mode.
2724 *
2725 * @param pVM The cross context VM structure.
2726 * @param pHlp The info helpers.
2727 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2728 */
2729static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2730{
2731 /* digest argument. */
2732 bool fGuest, fShadow, fHost;
2733 if (pszArgs)
2734 pszArgs = RTStrStripL(pszArgs);
2735 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2736 fShadow = fHost = fGuest = true;
2737 else
2738 {
2739 fShadow = fHost = fGuest = false;
2740 if (strstr(pszArgs, "guest"))
2741 fGuest = true;
2742 if (strstr(pszArgs, "shadow"))
2743 fShadow = true;
2744 if (strstr(pszArgs, "host"))
2745 fHost = true;
2746 }
2747
2748 /** @todo SMP support! */
2749 /* print info. */
2750 if (fGuest)
2751 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s (changed %RU64 times), A20 %s (changed %RU64 times)\n",
2752 PGMGetModeName(pVM->aCpus[0].pgm.s.enmGuestMode), pVM->aCpus[0].pgm.s.cGuestModeChanges.c,
2753 pVM->aCpus[0].pgm.s.fA20Enabled ? "enabled" : "disabled", pVM->aCpus[0].pgm.s.cA20Changes.c);
2754 if (fShadow)
2755 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->aCpus[0].pgm.s.enmShadowMode));
2756 if (fHost)
2757 {
2758 const char *psz;
2759 switch (pVM->pgm.s.enmHostMode)
2760 {
2761 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2762 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2763 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2764 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2765 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2766 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2767 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2768 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2769 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2770 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2771 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2772 default: psz = "unknown"; break;
2773 }
2774 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2775 }
2776}
2777
2778
2779/**
2780 * Dump the registered RAM ranges to the log.
2781 *
2782 * @param pVM The cross context VM structure.
2783 * @param pHlp The info helpers.
2784 * @param pszArgs Arguments, ignored.
2785 */
2786static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2787{
2788 NOREF(pszArgs);
2789 pHlp->pfnPrintf(pHlp,
2790 "RAM ranges (pVM=%p)\n"
2791 "%.*s %.*s\n",
2792 pVM,
2793 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2794 sizeof(RTHCPTR) * 2, "pvHC ");
2795
2796 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesXR3; pCur; pCur = pCur->pNextR3)
2797 pHlp->pfnPrintf(pHlp,
2798 "%RGp-%RGp %RHv %s\n",
2799 pCur->GCPhys,
2800 pCur->GCPhysLast,
2801 pCur->pvR3,
2802 pCur->pszDesc);
2803}
2804
2805
2806/**
2807 * Dump the page directory to the log.
2808 *
2809 * @param pVM The cross context VM structure.
2810 * @param pHlp The info helpers.
2811 * @param pszArgs Arguments, ignored.
2812 */
2813static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2814{
2815 /** @todo SMP support!! */
2816 PVMCPU pVCpu = &pVM->aCpus[0];
2817
2818/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2819 /* Big pages supported? */
2820 const bool fPSE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PSE);
2821
2822 /* Global pages supported? */
2823 const bool fPGE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PGE);
2824
2825 NOREF(pszArgs);
2826
2827 /*
2828 * Get page directory addresses.
2829 */
2830 pgmLock(pVM);
2831 PX86PD pPDSrc = pgmGstGet32bitPDPtr(pVCpu);
2832 Assert(pPDSrc);
2833
2834 /*
2835 * Iterate the page directory.
2836 */
2837 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
2838 {
2839 X86PDE PdeSrc = pPDSrc->a[iPD];
2840 if (PdeSrc.n.u1Present)
2841 {
2842 if (PdeSrc.b.u1Size && fPSE)
2843 pHlp->pfnPrintf(pHlp,
2844 "%04X - %RGp P=%d U=%d RW=%d G=%d - BIG\n",
2845 iPD,
2846 pgmGstGet4MBPhysPage(pVM, PdeSrc),
2847 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2848 else
2849 pHlp->pfnPrintf(pHlp,
2850 "%04X - %RGp P=%d U=%d RW=%d [G=%d]\n",
2851 iPD,
2852 (RTGCPHYS)(PdeSrc.u & X86_PDE_PG_MASK),
2853 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2854 }
2855 }
2856 pgmUnlock(pVM);
2857}
2858
2859
2860/**
2861 * Service a VMMCALLRING3_PGM_LOCK call.
2862 *
2863 * @returns VBox status code.
2864 * @param pVM The cross context VM structure.
2865 */
2866VMMR3DECL(int) PGMR3LockCall(PVM pVM)
2867{
2868 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSectX, true /* fHostCall */);
2869 AssertRC(rc);
2870 return rc;
2871}
2872
2873
2874/**
2875 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2876 *
2877 * @returns PGM_TYPE_*.
2878 * @param pgmMode The mode value to convert.
2879 */
2880DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2881{
2882 switch (pgmMode)
2883 {
2884 case PGMMODE_REAL: return PGM_TYPE_REAL;
2885 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2886 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2887 case PGMMODE_PAE:
2888 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2889 case PGMMODE_AMD64:
2890 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2891 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
2892 case PGMMODE_EPT: return PGM_TYPE_EPT;
2893 default:
2894 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2895 }
2896}
2897
2898
2899/**
2900 * Gets the index into the paging mode data array of a SHW+GST mode.
2901 *
2902 * @returns PGM::paPagingData index.
2903 * @param uShwType The shadow paging mode type.
2904 * @param uGstType The guest paging mode type.
2905 */
2906DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2907{
2908 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_MAX);
2909 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2910 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
2911 + (uGstType - PGM_TYPE_REAL);
2912}
2913
2914
2915/**
2916 * Gets the index into the paging mode data array of a SHW+GST mode.
2917 *
2918 * @returns PGM::paPagingData index.
2919 * @param enmShw The shadow paging mode.
2920 * @param enmGst The guest paging mode.
2921 */
2922DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2923{
2924 Assert(enmShw >= PGMMODE_32_BIT && enmShw < PGMMODE_MAX);
2925 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2926 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2927}
2928
2929
2930/**
2931 * Calculates the max data index.
2932 * @returns The number of entries in the paging data array.
2933 */
2934DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2935{
2936 return pgmModeDataIndex(PGM_TYPE_MAX, PGM_TYPE_AMD64) + 1;
2937}
2938
2939
2940/**
2941 * Initializes the paging mode data kept in PGM::paModeData.
2942 *
2943 * @param pVM The cross context VM structure.
2944 * @param fResolveGCAndR0 Indicate whether or not GC and Ring-0 symbols can be resolved now.
2945 * This is used early in the init process to avoid trouble with PDM
2946 * not being initialized yet.
2947 */
2948static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2949{
2950 PPGMMODEDATA pModeData;
2951 int rc;
2952
2953 /*
2954 * Allocate the array on the first call.
2955 */
2956 if (!pVM->pgm.s.paModeData)
2957 {
2958 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2959 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2960 }
2961
2962 /*
2963 * Initialize the array entries.
2964 */
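/* Each (shadow, guest) combination below gets a PGMMODEDATA entry whose
 function pointers are filled in by the InitData routines of the
 template-instantiated shadow (SHW), guest (GST) and combined (BTH)
 paging code. */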
2965 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2966 pModeData->uShwType = PGM_TYPE_32BIT;
2967 pModeData->uGstType = PGM_TYPE_REAL;
2968 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2969 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2970 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2971
2972 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_PROT)];
2973 pModeData->uShwType = PGM_TYPE_32BIT;
2974 pModeData->uGstType = PGM_TYPE_PROT;
2975 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2976 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2977 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2978
2979 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2980 pModeData->uShwType = PGM_TYPE_32BIT;
2981 pModeData->uGstType = PGM_TYPE_32BIT;
2982 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2983 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2984 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2985
2986 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2987 pModeData->uShwType = PGM_TYPE_PAE;
2988 pModeData->uGstType = PGM_TYPE_REAL;
2989 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2990 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2991 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2992
2993 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2994 pModeData->uShwType = PGM_TYPE_PAE;
2995 pModeData->uGstType = PGM_TYPE_PROT;
2996 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2997 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2998 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2999
3000 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
3001 pModeData->uShwType = PGM_TYPE_PAE;
3002 pModeData->uGstType = PGM_TYPE_32BIT;
3003 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3004 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3005 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3006
3007 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
3008 pModeData->uShwType = PGM_TYPE_PAE;
3009 pModeData->uGstType = PGM_TYPE_PAE;
3010 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3011 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3012 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3013
3014#ifdef VBOX_WITH_64_BITS_GUESTS
3015 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
3016 pModeData->uShwType = PGM_TYPE_AMD64;
3017 pModeData->uGstType = PGM_TYPE_AMD64;
3018 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3019 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3020 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3021#endif
3022
3023 /* The nested paging mode. */
3024 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
3025 pModeData->uShwType = PGM_TYPE_NESTED;
3026 pModeData->uGstType = PGM_TYPE_REAL;
3027 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3028 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3029
3030 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PROT)];
3031 pModeData->uShwType = PGM_TYPE_NESTED;
3032 pModeData->uGstType = PGM_TYPE_PROT;
3033 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3034 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3035
3036 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
3037 pModeData->uShwType = PGM_TYPE_NESTED;
3038 pModeData->uGstType = PGM_TYPE_32BIT;
3039 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3040 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3041
3042 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
3043 pModeData->uShwType = PGM_TYPE_NESTED;
3044 pModeData->uGstType = PGM_TYPE_PAE;
3045 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3046 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3047
3048#ifdef VBOX_WITH_64_BITS_GUESTS
3049 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3050 pModeData->uShwType = PGM_TYPE_NESTED;
3051 pModeData->uGstType = PGM_TYPE_AMD64;
3052 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3053 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3054#endif
3055
3056 /* The shadow part of the nested paging mode depends on the host paging mode (AMD-V only). */
3057 switch (pVM->pgm.s.enmHostMode)
3058 {
3059#if HC_ARCH_BITS == 32
3060 case SUPPAGINGMODE_32_BIT:
3061 case SUPPAGINGMODE_32_BIT_GLOBAL:
3062 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3063 {
3064 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3065 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3066 }
3067# ifdef VBOX_WITH_64_BITS_GUESTS
3068 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3069 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3070# endif
3071 break;
3072
3073 case SUPPAGINGMODE_PAE:
3074 case SUPPAGINGMODE_PAE_NX:
3075 case SUPPAGINGMODE_PAE_GLOBAL:
3076 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3077 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3078 {
3079 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3080 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3081 }
3082# ifdef VBOX_WITH_64_BITS_GUESTS
3083 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3084 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3085# endif
3086 break;
3087#endif /* HC_ARCH_BITS == 32 */
3088
3089#if HC_ARCH_BITS == 64 || defined(RT_OS_DARWIN)
3090 case SUPPAGINGMODE_AMD64:
3091 case SUPPAGINGMODE_AMD64_GLOBAL:
3092 case SUPPAGINGMODE_AMD64_NX:
3093 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3094# ifdef VBOX_WITH_64_BITS_GUESTS
3095 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
3096# else
3097 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3098# endif
3099 {
3100 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3101 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3102 }
3103 break;
3104#endif /* HC_ARCH_BITS == 64 || RT_OS_DARWIN */
3105
3106 default:
3107 AssertFailed();
3108 break;
3109 }
3110
3111 /* Extended Page Tables (EPT) / Intel VT-x */
3112 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
3113 pModeData->uShwType = PGM_TYPE_EPT;
3114 pModeData->uGstType = PGM_TYPE_REAL;
3115 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3116 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3117 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3118
3119 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
3120 pModeData->uShwType = PGM_TYPE_EPT;
3121 pModeData->uGstType = PGM_TYPE_PROT;
3122 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3123 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3124 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3125
3126 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
3127 pModeData->uShwType = PGM_TYPE_EPT;
3128 pModeData->uGstType = PGM_TYPE_32BIT;
3129 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3130 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3131 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3132
3133 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
3134 pModeData->uShwType = PGM_TYPE_EPT;
3135 pModeData->uGstType = PGM_TYPE_PAE;
3136 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3137 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3138 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3139
3140#ifdef VBOX_WITH_64_BITS_GUESTS
3141 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
3142 pModeData->uShwType = PGM_TYPE_EPT;
3143 pModeData->uGstType = PGM_TYPE_AMD64;
3144 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3145 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3146 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3147#endif
3148 return VINF_SUCCESS;
3149}
3150
3151
3152/**
3153 * Switch to different (or relocated in the relocate case) mode data.
3154 *
3155 * @param pVM The cross context VM structure.
3156 * @param pVCpu The cross context virtual CPU structure.
3157 * @param enmShw The shadow paging mode.
3158 * @param enmGst The guest paging mode.
3159 */
3160static void pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst)
3161{
3162 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
3163
3164 Assert(pModeData->uGstType == pgmModeToType(enmGst));
3165 Assert(pModeData->uShwType == pgmModeToType(enmShw));
3166
3167 /* shadow */
3168 pVCpu->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
3169 pVCpu->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
3170 pVCpu->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
3171 Assert(pVCpu->pgm.s.pfnR3ShwGetPage);
3172 pVCpu->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
3173
3174 pVCpu->pgm.s.pfnRCShwGetPage = pModeData->pfnRCShwGetPage;
3175 pVCpu->pgm.s.pfnRCShwModifyPage = pModeData->pfnRCShwModifyPage;
3176
3177 pVCpu->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
3178 pVCpu->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
3179
3180
3181 /* guest */
3182 pVCpu->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
3183 pVCpu->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
3184 pVCpu->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
3185 Assert(pVCpu->pgm.s.pfnR3GstGetPage);
3186 pVCpu->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
3187 pVCpu->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
3188 pVCpu->pgm.s.pfnRCGstGetPage = pModeData->pfnRCGstGetPage;
3189 pVCpu->pgm.s.pfnRCGstModifyPage = pModeData->pfnRCGstModifyPage;
3190 pVCpu->pgm.s.pfnRCGstGetPDE = pModeData->pfnRCGstGetPDE;
3191 pVCpu->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
3192 pVCpu->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
3193 pVCpu->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
3194
3195 /* both */
3196 pVCpu->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
3197 pVCpu->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
3198 pVCpu->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
3199 Assert(pVCpu->pgm.s.pfnR3BthSyncCR3);
3200 pVCpu->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
3201 pVCpu->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
3202#ifdef VBOX_STRICT
3203 pVCpu->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
3204#endif
3205 pVCpu->pgm.s.pfnR3BthMapCR3 = pModeData->pfnR3BthMapCR3;
3206 pVCpu->pgm.s.pfnR3BthUnmapCR3 = pModeData->pfnR3BthUnmapCR3;
3207
3208 pVCpu->pgm.s.pfnRCBthTrap0eHandler = pModeData->pfnRCBthTrap0eHandler;
3209 pVCpu->pgm.s.pfnRCBthInvalidatePage = pModeData->pfnRCBthInvalidatePage;
3210 pVCpu->pgm.s.pfnRCBthSyncCR3 = pModeData->pfnRCBthSyncCR3;
3211 pVCpu->pgm.s.pfnRCBthPrefetchPage = pModeData->pfnRCBthPrefetchPage;
3212 pVCpu->pgm.s.pfnRCBthVerifyAccessSyncPage = pModeData->pfnRCBthVerifyAccessSyncPage;
3213#ifdef VBOX_STRICT
3214 pVCpu->pgm.s.pfnRCBthAssertCR3 = pModeData->pfnRCBthAssertCR3;
3215#endif
3216 pVCpu->pgm.s.pfnRCBthMapCR3 = pModeData->pfnRCBthMapCR3;
3217 pVCpu->pgm.s.pfnRCBthUnmapCR3 = pModeData->pfnRCBthUnmapCR3;
3218
3219 pVCpu->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
3220 pVCpu->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
3221 pVCpu->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
3222 pVCpu->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
3223 pVCpu->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
3224#ifdef VBOX_STRICT
3225 pVCpu->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
3226#endif
3227 pVCpu->pgm.s.pfnR0BthMapCR3 = pModeData->pfnR0BthMapCR3;
3228 pVCpu->pgm.s.pfnR0BthUnmapCR3 = pModeData->pfnR0BthUnmapCR3;
3229}
3230
3231
3232/**
3233 * Calculates the shadow paging mode.
3234 *
3235 * @returns The shadow paging mode.
3236 * @param pVM The cross context VM structure.
3237 * @param enmGuestMode The guest mode.
3238 * @param enmHostMode The host mode.
3239 * @param enmShadowMode The current shadow mode.
3240 * @param penmSwitcher Where to store the switcher to use.
3241 * VMMSWITCHER_INVALID means no change.
3242 */
3243static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
3244{
3245 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
3246 switch (enmGuestMode)
3247 {
3248 /*
3249 * When switching to real or protected mode we don't change
3250 * anything since it's likely that we'll switch back pretty soon.
3251 *
3252 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID
3253 * and this code is then supposed to determine which shadow paging mode
3254 * and switcher to use during init.
3255 */
3256 case PGMMODE_REAL:
3257 case PGMMODE_PROTECTED:
3258 if ( enmShadowMode != PGMMODE_INVALID
3259 && !HMIsEnabled(pVM) /* always switch in hm mode! */)
3260 break; /* (no change) */
3261
3262 switch (enmHostMode)
3263 {
3264 case SUPPAGINGMODE_32_BIT:
3265 case SUPPAGINGMODE_32_BIT_GLOBAL:
3266 enmShadowMode = PGMMODE_32_BIT;
3267 enmSwitcher = VMMSWITCHER_32_TO_32;
3268 break;
3269
3270 case SUPPAGINGMODE_PAE:
3271 case SUPPAGINGMODE_PAE_NX:
3272 case SUPPAGINGMODE_PAE_GLOBAL:
3273 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3274 enmShadowMode = PGMMODE_PAE;
3275 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3276#ifdef DEBUG_bird
3277 if (RTEnvExist("VBOX_32BIT"))
3278 {
3279 enmShadowMode = PGMMODE_32_BIT;
3280 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3281 }
3282#endif
3283 break;
3284
3285 case SUPPAGINGMODE_AMD64:
3286 case SUPPAGINGMODE_AMD64_GLOBAL:
3287 case SUPPAGINGMODE_AMD64_NX:
3288 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3289 enmShadowMode = PGMMODE_PAE;
3290 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3291#ifdef DEBUG_bird
3292 if (RTEnvExist("VBOX_32BIT"))
3293 {
3294 enmShadowMode = PGMMODE_32_BIT;
3295 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3296 }
3297#endif
3298 break;
3299
3300 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3301 }
3302 break;
3303
3304 case PGMMODE_32_BIT:
3305 switch (enmHostMode)
3306 {
3307 case SUPPAGINGMODE_32_BIT:
3308 case SUPPAGINGMODE_32_BIT_GLOBAL:
3309 enmShadowMode = PGMMODE_32_BIT;
3310 enmSwitcher = VMMSWITCHER_32_TO_32;
3311 break;
3312
3313 case SUPPAGINGMODE_PAE:
3314 case SUPPAGINGMODE_PAE_NX:
3315 case SUPPAGINGMODE_PAE_GLOBAL:
3316 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3317 enmShadowMode = PGMMODE_PAE;
3318 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3319#ifdef DEBUG_bird
3320 if (RTEnvExist("VBOX_32BIT"))
3321 {
3322 enmShadowMode = PGMMODE_32_BIT;
3323 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3324 }
3325#endif
3326 break;
3327
3328 case SUPPAGINGMODE_AMD64:
3329 case SUPPAGINGMODE_AMD64_GLOBAL:
3330 case SUPPAGINGMODE_AMD64_NX:
3331 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3332 enmShadowMode = PGMMODE_PAE;
3333 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3334#ifdef DEBUG_bird
3335 if (RTEnvExist("VBOX_32BIT"))
3336 {
3337 enmShadowMode = PGMMODE_32_BIT;
3338 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3339 }
3340#endif
3341 break;
3342
3343 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3344 }
3345 break;
3346
3347 case PGMMODE_PAE:
3348 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3349 switch (enmHostMode)
3350 {
3351 case SUPPAGINGMODE_32_BIT:
3352 case SUPPAGINGMODE_32_BIT_GLOBAL:
3353 enmShadowMode = PGMMODE_PAE;
3354 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3355 break;
3356
3357 case SUPPAGINGMODE_PAE:
3358 case SUPPAGINGMODE_PAE_NX:
3359 case SUPPAGINGMODE_PAE_GLOBAL:
3360 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3361 enmShadowMode = PGMMODE_PAE;
3362 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3363 break;
3364
3365 case SUPPAGINGMODE_AMD64:
3366 case SUPPAGINGMODE_AMD64_GLOBAL:
3367 case SUPPAGINGMODE_AMD64_NX:
3368 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3369 enmShadowMode = PGMMODE_PAE;
3370 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3371 break;
3372
3373 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3374 }
3375 break;
3376
3377 case PGMMODE_AMD64:
3378 case PGMMODE_AMD64_NX:
3379 switch (enmHostMode)
3380 {
3381 case SUPPAGINGMODE_32_BIT:
3382 case SUPPAGINGMODE_32_BIT_GLOBAL:
3383 enmShadowMode = PGMMODE_AMD64;
3384 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3385 break;
3386
3387 case SUPPAGINGMODE_PAE:
3388 case SUPPAGINGMODE_PAE_NX:
3389 case SUPPAGINGMODE_PAE_GLOBAL:
3390 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3391 enmShadowMode = PGMMODE_AMD64;
3392 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3393 break;
3394
3395 case SUPPAGINGMODE_AMD64:
3396 case SUPPAGINGMODE_AMD64_GLOBAL:
3397 case SUPPAGINGMODE_AMD64_NX:
3398 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3399 enmShadowMode = PGMMODE_AMD64;
3400 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3401 break;
3402
3403 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3404 }
3405 break;
3406
3407
3408 default:
3409 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3410 *penmSwitcher = VMMSWITCHER_INVALID;
3411 return PGMMODE_INVALID;
3412 }
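/* With nested paging (AMD-V NPT / Intel EPT) the hardware walks the guest
 tables itself, so the effective shadow mode is whatever HM reports
 (e.g. PGMMODE_NESTED or PGMMODE_EPT), overriding the calculation above. */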
3413 /* Override the shadow mode if nested paging is active. */
3414 pVM->pgm.s.fNestedPaging = HMIsNestedPagingActive(pVM);
3415 if (pVM->pgm.s.fNestedPaging)
3416 enmShadowMode = HMGetShwPagingMode(pVM);
3417
3418 *penmSwitcher = enmSwitcher;
3419 return enmShadowMode;
3420}
3421
3422
3423/**
3424 * Performs the actual mode change.
3425 * This is called by PGMChangeMode and pgmR3InitPaging().
3426 *
3427 * @returns VBox status code. May suspend or power off the VM on error, but
3428 * this is signalled via forced action flags (FFs) rather than status codes.
3429 *
3430 * @param pVM The cross context VM structure.
3431 * @param pVCpu The cross context virtual CPU structure.
3432 * @param enmGuestMode The new guest mode. This is assumed to be different from
3433 * the current mode.
3434 */
3435VMMR3DECL(int) PGMR3ChangeMode(PVM pVM, PVMCPU pVCpu, PGMMODE enmGuestMode)
3436{
3437#if HC_ARCH_BITS == 32
3438 bool fIsOldGuestPagingMode64Bits = (pVCpu->pgm.s.enmGuestMode >= PGMMODE_AMD64);
3439#endif
3440 bool fIsNewGuestPagingMode64Bits = (enmGuestMode >= PGMMODE_AMD64);
3441
3442 Log(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3443 STAM_REL_COUNTER_INC(&pVCpu->pgm.s.cGuestModeChanges);
3444
3445 /*
3446 * Calc the shadow mode and switcher.
3447 */
3448 VMMSWITCHER enmSwitcher;
3449 PGMMODE enmShadowMode;
3450 enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVCpu->pgm.s.enmShadowMode, &enmSwitcher);
3451
3452#ifdef VBOX_WITH_RAW_MODE
3453 if ( enmSwitcher != VMMSWITCHER_INVALID
3454 && !HMIsEnabled(pVM))
3455 {
3456 /*
3457 * Select new switcher.
3458 */
3459 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3460 if (RT_FAILURE(rc))
3461 {
3462 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Rrc\n", enmSwitcher, rc));
3463 return rc;
3464 }
3465 }
3466#endif
3467
3468 /*
3469 * Exit old mode(s).
3470 */
3471#if HC_ARCH_BITS == 32
3472 /* The nested shadow paging mode for AMD-V changes when running 64-bit guests on 32-bit hosts; typically PAE <-> AMD64. */
3473 const bool fForceShwEnterExit = ( fIsOldGuestPagingMode64Bits != fIsNewGuestPagingMode64Bits
3474 && enmShadowMode == PGMMODE_NESTED);
3475#else
3476 const bool fForceShwEnterExit = false;
3477#endif
3478 /* shadow */
3479 if ( enmShadowMode != pVCpu->pgm.s.enmShadowMode
3480 || fForceShwEnterExit)
3481 {
3482 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3483 if (PGM_SHW_PFN(Exit, pVCpu))
3484 {
3485 int rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
3486 if (RT_FAILURE(rc))
3487 {
3488 AssertMsgFailed(("Exit failed for shadow mode %d: %Rrc\n", pVCpu->pgm.s.enmShadowMode, rc));
3489 return rc;
3490 }
3491 }
3492
3493 }
3494 else
3495 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
3496
3497 /* guest */
3498 if (PGM_GST_PFN(Exit, pVCpu))
3499 {
3500 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
3501 if (RT_FAILURE(rc))
3502 {
3503 AssertMsgFailed(("Exit failed for guest mode %d: %Rrc\n", pVCpu->pgm.s.enmGuestMode, rc));
3504 return rc;
3505 }
3506 }
3507
3508 /*
3509 * Load new paging mode data.
3510 */
3511 pgmR3ModeDataSwitch(pVM, pVCpu, enmShadowMode, enmGuestMode);
3512
3513 /*
3514 * Enter new shadow mode (if changed).
3515 */
3516 if ( enmShadowMode != pVCpu->pgm.s.enmShadowMode
3517 || fForceShwEnterExit)
3518 {
3519 int rc;
3520 pVCpu->pgm.s.enmShadowMode = enmShadowMode;
3521 switch (enmShadowMode)
3522 {
3523 case PGMMODE_32_BIT:
3524 rc = PGM_SHW_NAME_32BIT(Enter)(pVCpu, false);
3525 break;
3526 case PGMMODE_PAE:
3527 case PGMMODE_PAE_NX:
3528 rc = PGM_SHW_NAME_PAE(Enter)(pVCpu, false);
3529 break;
3530 case PGMMODE_AMD64:
3531 case PGMMODE_AMD64_NX:
3532 rc = PGM_SHW_NAME_AMD64(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3533 break;
3534 case PGMMODE_NESTED:
3535 rc = PGM_SHW_NAME_NESTED(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3536 break;
3537 case PGMMODE_EPT:
3538 rc = PGM_SHW_NAME_EPT(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3539 break;
3540 case PGMMODE_REAL:
3541 case PGMMODE_PROTECTED:
3542 default:
3543 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3544 return VERR_INTERNAL_ERROR;
3545 }
3546 if (RT_FAILURE(rc))
3547 {
3548 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Rrc\n", enmShadowMode, rc));
3549 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
3550 return rc;
3551 }
3552 }
3553
3554 /*
3555 * Always flag the necessary updates.
3556 */
3557 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3558
3559 /*
3560 * Enter the new guest and shadow+guest modes.
3561 */
3562 int rc = -1;
3563 int rc2 = -1;
3564 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3565 pVCpu->pgm.s.enmGuestMode = enmGuestMode;
3566 switch (enmGuestMode)
3567 {
3568 case PGMMODE_REAL:
3569 rc = PGM_GST_NAME_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3570 switch (pVCpu->pgm.s.enmShadowMode)
3571 {
3572 case PGMMODE_32_BIT:
3573 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3574 break;
3575 case PGMMODE_PAE:
3576 case PGMMODE_PAE_NX:
3577 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3578 break;
3579 case PGMMODE_NESTED:
3580 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3581 break;
3582 case PGMMODE_EPT:
3583 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3584 break;
3585 case PGMMODE_AMD64:
3586 case PGMMODE_AMD64_NX:
3587 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3588 default: AssertFailed(); break;
3589 }
3590 break;
3591
3592 case PGMMODE_PROTECTED:
3593 rc = PGM_GST_NAME_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3594 switch (pVCpu->pgm.s.enmShadowMode)
3595 {
3596 case PGMMODE_32_BIT:
3597 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3598 break;
3599 case PGMMODE_PAE:
3600 case PGMMODE_PAE_NX:
3601 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3602 break;
3603 case PGMMODE_NESTED:
3604 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3605 break;
3606 case PGMMODE_EPT:
3607 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3608 break;
3609 case PGMMODE_AMD64:
3610 case PGMMODE_AMD64_NX:
3611 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3612 default: AssertFailed(); break;
3613 }
3614 break;
3615
3616 case PGMMODE_32_BIT:
3617 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAGE_MASK;
3618 rc = PGM_GST_NAME_32BIT(Enter)(pVCpu, GCPhysCR3);
3619 switch (pVCpu->pgm.s.enmShadowMode)
3620 {
3621 case PGMMODE_32_BIT:
3622 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVCpu, GCPhysCR3);
3623 break;
3624 case PGMMODE_PAE:
3625 case PGMMODE_PAE_NX:
3626 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVCpu, GCPhysCR3);
3627 break;
3628 case PGMMODE_NESTED:
3629 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVCpu, GCPhysCR3);
3630 break;
3631 case PGMMODE_EPT:
3632 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVCpu, GCPhysCR3);
3633 break;
3634 case PGMMODE_AMD64:
3635 case PGMMODE_AMD64_NX:
3636 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3637 default: AssertFailed(); break;
3638 }
3639 break;
3640
3641 case PGMMODE_PAE_NX:
3642 case PGMMODE_PAE:
3643 {
3644 uint32_t u32Dummy, u32Features;
3645
3646 CPUMGetGuestCpuId(pVCpu, 1, 0, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3647 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3648 return VMSetRuntimeError(pVM, VMSETRTERR_FLAGS_FATAL, "PAEmode",
3649 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. PAE support can be enabled using the VM settings (System/Processor)"));
3650
3651 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAE_PAGE_MASK;
3652 rc = PGM_GST_NAME_PAE(Enter)(pVCpu, GCPhysCR3);
3653 switch (pVCpu->pgm.s.enmShadowMode)
3654 {
3655 case PGMMODE_PAE:
3656 case PGMMODE_PAE_NX:
3657 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVCpu, GCPhysCR3);
3658 break;
3659 case PGMMODE_NESTED:
3660 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVCpu, GCPhysCR3);
3661 break;
3662 case PGMMODE_EPT:
3663 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVCpu, GCPhysCR3);
3664 break;
3665 case PGMMODE_32_BIT:
3666 case PGMMODE_AMD64:
3667 case PGMMODE_AMD64_NX:
3668 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3669 default: AssertFailed(); break;
3670 }
3671 break;
3672 }
3673
3674#ifdef VBOX_WITH_64_BITS_GUESTS
3675 case PGMMODE_AMD64_NX:
3676 case PGMMODE_AMD64:
3677 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & UINT64_C(0xfffffffffffff000); /** @todo define this mask! */
3678 rc = PGM_GST_NAME_AMD64(Enter)(pVCpu, GCPhysCR3);
3679 switch (pVCpu->pgm.s.enmShadowMode)
3680 {
3681 case PGMMODE_AMD64:
3682 case PGMMODE_AMD64_NX:
3683 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVCpu, GCPhysCR3);
3684 break;
3685 case PGMMODE_NESTED:
3686 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVCpu, GCPhysCR3);
3687 break;
3688 case PGMMODE_EPT:
3689 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVCpu, GCPhysCR3);
3690 break;
3691 case PGMMODE_32_BIT:
3692 case PGMMODE_PAE:
3693 case PGMMODE_PAE_NX:
3694 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
3695 default: AssertFailed(); break;
3696 }
3697 break;
3698#endif
3699
3700 default:
3701 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3702 rc = VERR_NOT_IMPLEMENTED;
3703 break;
3704 }
3705
3706 /* Combine the status codes (informational statuses are not passed on). */
3707 AssertRC(rc);
3708 AssertRC(rc2);
3709 if (RT_SUCCESS(rc))
3710 {
3711 rc = rc2;
3712 if (RT_SUCCESS(rc)) /* no informational status codes. */
3713 rc = VINF_SUCCESS;
3714 }
3715
3716 /* Notify HM as well. */
3717 HMR3PagingModeChanged(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
3718 return rc;
3719}
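/*
 * A minimal caller sketch (hypothetical; the real trigger is PGMChangeMode
 * reacting to guest changes of CR0.PG, CR4.PAE and EFER.LME):
 *
 *     PGMMODE enmNewMode = PGMMODE_PAE;   // derived from the new CR0/CR4/EFER values
 *     if (enmNewMode != PGMGetGuestMode(pVCpu))
 *     {
 *         int rc = PGMR3ChangeMode(pVM, pVCpu, enmNewMode);
 *         AssertRC(rc);
 *     }
 *
 * The guard is required since the function assumes the new mode differs
 * from the current one.
 */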
3720
3721
3722/**
3723 * Called by pgmPoolFlushAllInt prior to flushing the pool.
3724 *
3725 * @returns VBox status code, fully asserted.
3726 * @param pVCpu The cross context virtual CPU structure.
3727 */
3728int pgmR3ExitShadowModeBeforePoolFlush(PVMCPU pVCpu)
3729{
3730 /* Unmap the old CR3 value before flushing everything. */
3731 int rc = PGM_BTH_PFN(UnmapCR3, pVCpu)(pVCpu);
3732 AssertRC(rc);
3733
3734 /* Exit the current shadow paging mode as well; nested paging and EPT use a root CR3 which will get flushed here. */
3735 rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
3736 AssertRC(rc);
3737 Assert(pVCpu->pgm.s.pShwPageCR3R3 == NULL);
3738 return rc;
3739}
3740
3741
3742/**
3743 * Called by pgmPoolFlushAllInt after flushing the pool.
3744 *
3745 * @returns VBox status code, fully asserted.
3746 * @param pVM The cross context VM structure.
3747 * @param pVCpu The cross context virtual CPU structure.
3748 */
3749int pgmR3ReEnterShadowModeAfterPoolFlush(PVM pVM, PVMCPU pVCpu)
3750{
3751 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
3752 int rc = PGMR3ChangeMode(pVM, pVCpu, PGMGetGuestMode(pVCpu));
3753 Assert(VMCPU_FF_IS_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3));
3754 AssertRCReturn(rc, rc);
3755 AssertRCSuccessReturn(rc, VERR_IPE_UNEXPECTED_INFO_STATUS);
3756
3757 Assert(pVCpu->pgm.s.pShwPageCR3R3 != NULL);
3758 AssertMsg( pVCpu->pgm.s.enmShadowMode >= PGMMODE_NESTED
3759 || CPUMGetHyperCR3(pVCpu) == PGMGetHyperCR3(pVCpu),
3760 ("%RHp != %RHp %s\n", (RTHCPHYS)CPUMGetHyperCR3(pVCpu), PGMGetHyperCR3(pVCpu), PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
3761 return rc;
3762}
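/*
 * The two helpers above form the pool-flush bracket.  A sketch of the
 * intended sequence (the real sequencing lives in pgmPoolFlushAllInt):
 *
 *     rc = pgmR3ExitShadowModeBeforePoolFlush(pVCpu);        // unmap CR3 and drop the root pages
 *     // ... flush the shadow page pool ...
 *     rc = pgmR3ReEnterShadowModeAfterPoolFlush(pVM, pVCpu); // rebuild the shadow mode state
 *
 * pgmR3RefreshShadowModeAfterA20Change() below is an in-tree example of the
 * same pairing.
 */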
3763
3764
3765/**
3766 * Called by PGMR3PhysSetA20 after changing the A20 state.
3767 *
3768 * @param pVCpu The cross context virtual CPU structure.
3769 */
3770void pgmR3RefreshShadowModeAfterA20Change(PVMCPU pVCpu)
3771{
3772 /** @todo Probably doing a bit too much here. */
3773 int rc = pgmR3ExitShadowModeBeforePoolFlush(pVCpu);
3774 AssertReleaseRC(rc);
3775 rc = pgmR3ReEnterShadowModeAfterPoolFlush(pVCpu->CTX_SUFF(pVM), pVCpu);
3776 AssertReleaseRC(rc);
3777}
3778
3779
3780#ifdef VBOX_WITH_DEBUGGER
3781
3782/**
3783 * @callback_method_impl{FNDBGCCMD, The '.pgmerror' and '.pgmerroroff' commands.}
3784 */
3785static DECLCALLBACK(int) pgmR3CmdError(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3786{
3787 /*
3788 * Validate input.
3789 */
3790 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3791 PVM pVM = pUVM->pVM;
3792 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, cArgs == 0 || (cArgs == 1 && paArgs[0].enmType == DBGCVAR_TYPE_STRING));
3793
3794 if (!cArgs)
3795 {
3796 /*
3797 * Print the list of error injection locations with status.
3798 */
3799 DBGCCmdHlpPrintf(pCmdHlp, "PGM error inject locations:\n");
3800 DBGCCmdHlpPrintf(pCmdHlp, " handy - %RTbool\n", pVM->pgm.s.fErrInjHandyPages);
3801 }
3802 else
3803 {
3804 /*
3805 * String switch on where to inject the error.
3806 */
3807 bool const fNewState = !strcmp(pCmd->pszCmd, "pgmerror");
3808 const char *pszWhere = paArgs[0].u.pszString;
3809 if (!strcmp(pszWhere, "handy"))
3810 ASMAtomicWriteBool(&pVM->pgm.s.fErrInjHandyPages, fNewState);
3811 else
3812 return DBGCCmdHlpPrintf(pCmdHlp, "error: Invalid 'where' value: %s.\n", pszWhere);
3813 DBGCCmdHlpPrintf(pCmdHlp, "done\n");
3814 }
3815 return VINF_SUCCESS;
3816}
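/*
 * Debugger console usage sketch (syntax inferred from the parameter
 * validation above):
 *
 *     .pgmerror              ; list the injection points and their state
 *     .pgmerror handy        ; start failing the handy page allocations
 *     .pgmerroroff handy     ; stop injecting errors there
 */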
3817
3818
3819/**
3820 * @callback_method_impl{FNDBGCCMD, The '.pgmsync' command.}
3821 */
3822static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3823{
3824 /*
3825 * Validate input.
3826 */
3827 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3828 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3829 PVMCPU pVCpu = VMMR3GetCpuByIdU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp));
3830 if (!pVCpu)
3831 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid CPU ID");
3832
3833 /*
3834 * Force page directory sync.
3835 */
3836 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3837
3838 int rc = DBGCCmdHlpPrintf(pCmdHlp, "Forcing page directory sync.\n");
3839 if (RT_FAILURE(rc))
3840 return rc;
3841
3842 return VINF_SUCCESS;
3843}
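/*
 * Usage sketch: '.pgmsync' takes no arguments; it merely raises
 * VMCPU_FF_PGM_SYNC_CR3 on the current debugger CPU so the page
 * directories are resynced.
 */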
3844
3845#ifdef VBOX_STRICT
3846
3847/**
3848 * EMT callback for pgmR3CmdAssertCR3.
3849 *
3850 * @returns VBox status code.
3851 * @param pUVM The user mode VM handle.
3852 * @param pcErrors Where to return the error count.
3853 */
3854static DECLCALLBACK(int) pgmR3CmdAssertCR3EmtWorker(PUVM pUVM, unsigned *pcErrors)
3855{
3856 PVM pVM = pUVM->pVM;
3857 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
3858 PVMCPU pVCpu = VMMGetCpu(pVM);
3859
3860 *pcErrors = PGMAssertCR3(pVM, pVCpu, CPUMGetGuestCR3(pVCpu), CPUMGetGuestCR4(pVCpu));
3861
3862 return VINF_SUCCESS;
3863}
3864
3865
3866/**
3867 * @callback_method_impl{FNDBGCCMD, The '.pgmassertcr3' command.}
3868 */
3869static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3870{
3871 /*
3872 * Validate input.
3873 */
3874 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3875 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3876
3877 int rc = DBGCCmdHlpPrintf(pCmdHlp, "Checking shadow CR3 page tables for consistency.\n");
3878 if (RT_FAILURE(rc))
3879 return rc;
3880
3881 unsigned cErrors = 0;
3882 rc = VMR3ReqCallWaitU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp), (PFNRT)pgmR3CmdAssertCR3EmtWorker, 2, pUVM, &cErrors);
3883 if (RT_FAILURE(rc))
3884 return DBGCCmdHlpFail(pCmdHlp, pCmd, "VMR3ReqCallWaitU failed: %Rrc", rc);
3885 if (cErrors > 0)
3886 return DBGCCmdHlpFail(pCmdHlp, pCmd, "PGMAssertCR3: %u error(s)", cErrors);
3887 return DBGCCmdHlpPrintf(pCmdHlp, "PGMAssertCR3: OK\n");
3888}
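/*
 * Usage sketch (strict builds only): '.pgmassertcr3' takes no arguments;
 * it runs PGMAssertCR3() on the current CPU's EMT and fails the command
 * when any inconsistency between the guest and shadow tables is found.
 */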
3889
3890#endif /* VBOX_STRICT */
3891
3892/**
3893 * @callback_method_impl{FNDBGCCMD, The '.pgmsyncalways' command.}
3894 */
3895static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3896{
3897 /*
3898 * Validate input.
3899 */
3900 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3901 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3902 PVMCPU pVCpu = VMMR3GetCpuByIdU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp));
3903 if (!pVCpu)
3904 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid CPU ID");
3905
3906 /*
3907 * Force page directory sync.
3908 */
3909 int rc;
3910 if (pVCpu->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
3911 {
3912 ASMAtomicAndU32(&pVCpu->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
3913 rc = DBGCCmdHlpPrintf(pCmdHlp, "Disabled permanent forced page directory syncing.\n");
3914 }
3915 else
3916 {
3917 ASMAtomicOrU32(&pVCpu->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
3918 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3919 rc = DBGCCmdHlpPrintf(pCmdHlp, "Enabled permanent forced page directory syncing.\n");
3920 }
3921 return rc;
3922}
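/*
 * Usage sketch: '.pgmsyncalways' is a toggle; the first invocation sets
 * PGM_SYNC_ALWAYS so every world switch forces a page directory sync, the
 * next invocation clears it again.
 */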
3923
3924
3925/**
3926 * @callback_method_impl{FNDBGCCMD, The '.pgmphystofile' command.}
3927 */
3928static DECLCALLBACK(int) pgmR3CmdPhysToFile(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3929{
3930 /*
3931 * Validate input.
3932 */
3933 NOREF(pCmd);
3934 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3935 PVM pVM = pUVM->pVM;
3936 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, cArgs == 1 || cArgs == 2);
3937 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, paArgs[0].enmType == DBGCVAR_TYPE_STRING);
3938 if (cArgs == 2)
3939 {
3940 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 1, paArgs[1].enmType == DBGCVAR_TYPE_STRING);
3941 if (strcmp(paArgs[1].u.pszString, "nozero"))
3942 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid 2nd argument '%s', must be 'nozero'.\n", paArgs[1].u.pszString);
3943 }
3944 bool fIncZeroPgs = cArgs < 2;
3945
3946 /*
3947 * Open the output file and get the ram parameters.
3948 */
3949 RTFILE hFile;
3950 int rc = RTFileOpen(&hFile, paArgs[0].u.pszString, RTFILE_O_WRITE | RTFILE_O_CREATE_REPLACE | RTFILE_O_DENY_WRITE);
3951 if (RT_FAILURE(rc))
3952 return DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileOpen(,'%s',) -> %Rrc.\n", paArgs[0].u.pszString, rc);
3953
3954 uint32_t cbRamHole = 0;
3955 CFGMR3QueryU32Def(CFGMR3GetRootU(pUVM), "RamHoleSize", &cbRamHole, MM_RAM_HOLE_SIZE_DEFAULT);
3956 uint64_t cbRam = 0;
3957 CFGMR3QueryU64Def(CFGMR3GetRootU(pUVM), "RamSize", &cbRam, 0);
3958 RTGCPHYS GCPhysEnd = cbRam + cbRamHole;
3959
3960 /*
3961 * Dump the physical memory, page by page.
3962 */
3963 RTGCPHYS GCPhys = 0;
3964 char abZeroPg[PAGE_SIZE];
3965 RT_ZERO(abZeroPg);
3966
3967 pgmLock(pVM);
3968 for (PPGMRAMRANGE pRam = pVM->pgm.s.pRamRangesXR3;
3969 pRam && pRam->GCPhys < GCPhysEnd && RT_SUCCESS(rc);
3970 pRam = pRam->pNextR3)
3971 {
3972 /* fill the gap */
3973 if (pRam->GCPhys > GCPhys && fIncZeroPgs)
3974 {
3975 while (pRam->GCPhys > GCPhys && RT_SUCCESS(rc))
3976 {
3977 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
3978 GCPhys += PAGE_SIZE;
3979 }
3980 }
3981
3982 PCPGMPAGE pPage = &pRam->aPages[0];
3983 while (GCPhys < pRam->GCPhysLast && RT_SUCCESS(rc))
3984 {
3985 if ( PGM_PAGE_IS_ZERO(pPage)
3986 || PGM_PAGE_IS_BALLOONED(pPage))
3987 {
3988 if (fIncZeroPgs)
3989 {
3990 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
3991 if (RT_FAILURE(rc))
3992 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
3993 }
3994 }
3995 else
3996 {
3997 switch (PGM_PAGE_GET_TYPE(pPage))
3998 {
3999 case PGMPAGETYPE_RAM:
4000 case PGMPAGETYPE_ROM_SHADOW: /* trouble?? */
4001 case PGMPAGETYPE_ROM:
4002 case PGMPAGETYPE_MMIO2:
4003 {
4004 void const *pvPage;
4005 PGMPAGEMAPLOCK Lock;
4006 rc = PGMPhysGCPhys2CCPtrReadOnly(pVM, GCPhys, &pvPage, &Lock);
4007 if (RT_SUCCESS(rc))
4008 {
4009 rc = RTFileWrite(hFile, pvPage, PAGE_SIZE, NULL);
4010 PGMPhysReleasePageMappingLock(pVM, &Lock);
4011 if (RT_FAILURE(rc))
4012 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4013 }
4014 else
4015 DBGCCmdHlpPrintf(pCmdHlp, "error: PGMPhysGCPhys2CCPtrReadOnly -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4016 break;
4017 }
4018
4019 default:
4020 AssertFailed();
4021 case PGMPAGETYPE_MMIO:
4022 case PGMPAGETYPE_MMIO2_ALIAS_MMIO:
4023 case PGMPAGETYPE_SPECIAL_ALIAS_MMIO:
4024 if (fIncZeroPgs)
4025 {
4026 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
4027 if (RT_FAILURE(rc))
4028 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4029 }
4030 break;
4031 }
4032 }
4033
4034
4035 /* advance */
4036 GCPhys += PAGE_SIZE;
4037 pPage++;
4038 }
4039 }
4040 pgmUnlock(pVM);
4041
4042 RTFileClose(hFile);
4043 if (RT_SUCCESS(rc))
4044 return DBGCCmdHlpPrintf(pCmdHlp, "Successfully saved physical memory to '%s'.\n", paArgs[0].u.pszString);
4045 return VINF_SUCCESS;
4046}
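/*
 * Usage sketch: dump the guest RAM to a flat file, one PAGE_SIZE block per
 * guest page, optionally skipping zero and ballooned pages (the path below
 * is illustrative only):
 *
 *     .pgmphystofile "/tmp/guest-ram.bin"
 *     .pgmphystofile "/tmp/guest-ram.bin" nozero
 */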
4047
4048#endif /* VBOX_WITH_DEBUGGER */
4049
4050/**
4051 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
4052 */
4053typedef struct PGMCHECKINTARGS
4054{
4055 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
4056 PPGMPHYSHANDLER pPrevPhys;
4057#ifdef VBOX_WITH_RAW_MODE
4058 PPGMVIRTHANDLER pPrevVirt;
4059 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
4060#else
4061 void *pvFiller1, *pvFiller2;
4062#endif
4063 PVM pVM;
4064} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
4065
4066/**
4067 * Validate a node in the physical handler tree.
4068 *
4069 * @returns 0 if ok, otherwise 1.
4070 * @param pNode The handler node.
4071 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4072 */
4073static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4074{
4075 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4076 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
4077 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4078 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,
4079 ("pCur=%p %RGp-%RGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4080 AssertReleaseMsg( !pArgs->pPrevPhys
4081 || ( pArgs->fLeftToRight
4082 ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key
4083 : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
4084 ("pPrevPhys=%p %RGp-%RGp %s\n"
4085 " pCur=%p %RGp-%RGp %s\n",
4086 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
4087 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4088 pArgs->pPrevPhys = pCur;
4089 return 0;
4090}
4091
4092#ifdef VBOX_WITH_RAW_MODE
4093
4094/**
4095 * Validate a node in the virtual handler tree.
4096 *
4097 * @returns 0 if ok, otherwise 1.
4098 * @param pNode The handler node.
4099 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4100 */
4101static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
4102{
4103 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4104 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
4105 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4106 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGv-%RGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4107 AssertReleaseMsg( !pArgs->pPrevVirt
4108 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
4109 ("pPrevVirt=%p %RGv-%RGv %s\n"
4110 " pCur=%p %RGv-%RGv %s\n",
4111 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
4112 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4113 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
4114 {
4115 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
4116 ("pCur=%p %RGv-%RGv %s\n"
4117 "iPage=%d offVirtHandle=%#x expected %#x\n",
4118 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
4119 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
4120 }
4121 pArgs->pPrevVirt = pCur;
4122 return 0;
4123}
4124
4125
4126/**
4127 * Validate a node in the physical-to-virtual handler tree.
4128 *
4129 * @returns 0 if ok, otherwise 1.
4130 * @param pNode The handler node.
4131 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4132 */
4133static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4134{
4135 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4136 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
4137 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
4138 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
4139 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
4140 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
4141 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
4142 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
4143 " pCur=%p %RGp-%RGp\n",
4144 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
4145 pCur, pCur->Core.Key, pCur->Core.KeyLast));
4152 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
4153 ("pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4154 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4155 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
4156 {
4157 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
4158 for (;;)
4159 {
4160 pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur2 + (pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)); /* advance along the alias chain */
4161 AssertReleaseMsg(pCur2 != pCur,
4162 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4163 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4164 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
4165 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4166 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4167 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4168 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4169 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
4170 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4171 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4172 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4173 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4174 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
4175 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4176 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4177 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4178 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4179 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
4180 break;
4181 }
4182 }
4183
4184 pArgs->pPrevPhys2Virt = pCur;
4185 return 0;
4186}
4187
4188#endif /* VBOX_WITH_RAW_MODE */
4189
4190/**
4191 * Perform an integrity check on the PGM component.
4192 *
4193 * @returns VINF_SUCCESS if everything is fine.
4194 * @returns VBox error status after asserting on integrity breach.
4195 * @param pVM The cross context VM structure.
4196 */
4197VMMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
4198{
4199 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
4200
4201 /*
4202 * Check the trees.
4203 */
4204 int cErrors = 0;
4205 const PGMCHECKINTARGS s_LeftToRight = { true, NULL, NULL, NULL, pVM };
4206 const PGMCHECKINTARGS s_RightToLeft = { false, NULL, NULL, NULL, pVM };
4207 PGMCHECKINTARGS Args = s_LeftToRight;
4208 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4209 Args = s_RightToLeft;
4210 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4211#ifdef VBOX_WITH_RAW_MODE
4212 Args = s_LeftToRight;
4213 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4214 Args = s_RightToLeft;
4215 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4216 Args = s_LeftToRight;
4217 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4218 Args = s_RightToLeft;
4219 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4220 Args = s_LeftToRight;
4221 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4222 Args = s_RightToLeft;
4223 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4224#endif /* VBOX_WITH_RAW_MODE */
4225
4226 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
4227}
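/*
 * A minimal sketch of driving the integrity check, e.g. from a testcase
 * (hypothetical caller; any ring-3 code holding a valid pVM may use it):
 *
 *     int rc = PGMR3CheckIntegrity(pVM);
 *     AssertLogRelMsg(RT_SUCCESS(rc), ("PGM handler trees corrupted: %Rrc\n", rc));
 */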
4228