VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/PGMPool.cpp@ 105745

Last change on this file since 105745 was 104840, checked in by vboxsync, 6 months ago

VMM/PGM: Refactored RAM ranges, MMIO2 ranges and ROM ranges and added MMIO ranges (to PGM) so we can safely access RAM ranges at runtime w/o fear of them ever being freed up. It is now only possible to create these during VM creation and loading, and they will live till VM destruction (except for MMIO2 which could be destroyed during loading (PCNet fun)). The lookup handling is by table instead of pointer tree. No more ring-0 pointers in shared data. bugref:10687 bugref:10093

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id Revision
File size: 74.4 KB
1/* $Id: PGMPool.cpp 104840 2024-06-05 00:59:51Z vboxsync $ */
2/** @file
3 * PGM Shadow Page Pool.
4 */
5
6/*
7 * Copyright (C) 2006-2023 Oracle and/or its affiliates.
8 *
9 * This file is part of VirtualBox base platform packages, as
10 * available from https://www.virtualbox.org.
11 *
12 * This program is free software; you can redistribute it and/or
13 * modify it under the terms of the GNU General Public License
14 * as published by the Free Software Foundation, in version 3 of the
15 * License.
16 *
17 * This program is distributed in the hope that it will be useful, but
18 * WITHOUT ANY WARRANTY; without even the implied warranty of
19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
20 * General Public License for more details.
21 *
22 * You should have received a copy of the GNU General Public License
23 * along with this program; if not, see <https://www.gnu.org/licenses>.
24 *
25 * SPDX-License-Identifier: GPL-3.0-only
26 */
27
28/** @page pg_pgm_pool PGM Shadow Page Pool
29 *
30 * Motivations:
31 * -# Tracking the relationship between shadow page tables and physical guest
32 * pages. This should let us skip most of the global flushes that now follow
33 * access handler changes. The main expense is flushing shadow pages.
34 * -# Limit the pool size if necessary (default is kind of limitless).
35 * -# Allocate shadow pages from RC. We used to only do this in SyncCR3.
36 * -# Required for 64-bit guests.
37 * -# Combining the PD cache and page pool in order to simplify caching.
38 *
39 *
40 * @section sec_pgm_pool_outline Design Outline
41 *
42 * The shadow page pool tracks pages used for shadowing paging structures (i.e.
43 * page tables, page directory, page directory pointer table and page map
44 * level-4). Each page in the pool has a unique identifier. This identifier is
45 * used to link a guest physical page to a shadow PT. The identifier is a
46 * non-zero value and has a relatively low max value - say 14 bits. This makes
47 * it possible to fit it into the upper bits of the aHCPhys entries in the
48 * ram range.
49 *
50 * By restricting host physical memory to the first 48 bits (which is the
51 * announced physical memory range of the K8L chip (scheduled for 2008)), we
52 * can safely use the upper 16 bits for shadow page ID and reference counting.
53 *
54 * Update: The 48 bit assumption will be lifted with the new physical memory
55 * management (PGMPAGE), so we won't have any trouble when someone stuffs 2TB
56 * into a box in some years.
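 *
 * For illustration, the packing described above amounts to something like the
 * following (a minimal sketch under the stated assumptions: a 48-bit host
 * physical address, a 14-bit shadow page id and a 2-bit reference state; the
 * real tracking macros live in PGMInternal.h and may differ in detail):
 * @code
 *   // Pack: keep the low 48 bits of HCPhys, stash the tracking data on top.
 *   uint64_t u = (HCPhys & UINT64_C(0x0000ffffffffffff))
 *              | ((uint64_t)(idShwPage & 0x3fff) << 48)
 *              | ((uint64_t)(cRefState & 0x3)    << 62);
 *   // Unpack the shadow page id again:
 *   uint16_t idShwPage2 = (uint16_t)((u >> 48) & 0x3fff);
 * @endcode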
57 *
58 * Now, it's possible for a page to be aliased, i.e. mapped by more than one PT
59 * or PD. This is solved by creating a list of physical cross reference extents
60 * whenever this happens. Each node in the list (extent) can contain 3 page
61 * pool indexes. The list itself is chained using indexes into the paPhysExt
62 * array.
63 *
64 *
65 * @section sec_pgm_pool_life Life Cycle of a Shadow Page
66 *
67 * -# The SyncPT function requests a page from the pool.
68 * The request includes the kind of page it is (PT/PD, PAE/legacy), the
69 * address of the page it's shadowing, and more.
70 * -# The pool responds to the request by allocating a new page.
71 * When the cache is enabled, it will first check if it's in the cache.
72 * Should the pool be exhausted, one of two things can be done:
73 * -# Flush the whole pool and current CR3.
74 * -# Use the cache to find a page which can be flushed (~age).
75 * -# The SyncPT function will sync one or more pages and insert them into the
76 * shadow PD.
77 * -# The SyncPage function may sync more pages on later \#PFs.
78 * -# The page is freed / flushed in SyncCR3 (perhaps) and some other cases.
79 * When caching is enabled, the page isn't flushed but remains in the cache.
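 *
 * A hedged sketch of the allocation step above (the exact pgmPoolAlloc
 * prototype lives in PGMInternal.h; the parameter list below is an assumption
 * pieced together from the page fields used in this file):
 * @code
 *   PPGMPOOLPAGE pShwPage;
 *   int rc = pgmPoolAlloc(pVM, GCPhysPT, enmKind, enmAccess, fA20Enabled,
 *                         iUser, iUserTable, false, &pShwPage); // false: unlocked
 * @endcode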
80 *
81 *
82 * @section sec_pgm_pool_monitoring Monitoring
83 *
84 * We always monitor GUEST_PAGE_SIZE chunks of memory. When we've got multiple
85 * shadow pages for the same GUEST_PAGE_SIZE of guest memory (PAE and mixed
86 * PD/PT) the pages sharing the monitor get linked using the
87 * iMonitoredNext/Prev. The head page is the pvUser to the access handlers.
88 *
89 *
90 * @section sec_pgm_pool_impl Implementation
91 *
92 * The pool will take pages from the MM page pool. The tracking data
93 * (attributes, bitmaps and so on) are allocated from the hypervisor heap. The
94 * pool content can be accessed both by using the page id and the physical
95 * address (HC). The former is managed by means of an array, the latter by an
96 * offset based AVL tree.
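 *
 * A minimal sketch of the two lookups, using only structures visible in this
 * file:
 * @code
 *   PPGMPOOLPAGE pByIdx  = &pPool->aPages[idPage];   // by page id
 *   PPGMPOOLPAGE pByPhys = (PPGMPOOLPAGE)RTAvloHCPhysGet(&pPool->HCPhysTree, HCPhys);
 * @endcode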
97 *
98 * Flushing of a pool page means that we iterate the content (we know what kind
99 * it is) and update the link information in the ram range.
100 *
101 * ...
102 */
103
104
105/*********************************************************************************************************************************
106* Header Files *
107*********************************************************************************************************************************/
108#define LOG_GROUP LOG_GROUP_PGM_POOL
109#define VBOX_WITHOUT_PAGING_BIT_FIELDS /* 64-bit bitfields are just asking for trouble. See @bugref{9841} and others. */
110#include <VBox/vmm/pgm.h>
111#include <VBox/vmm/mm.h>
112#include "PGMInternal.h"
113#include <VBox/vmm/vmcc.h>
114#include <VBox/vmm/uvm.h>
115#include "PGMInline.h"
116
117#include <VBox/log.h>
118#include <VBox/err.h>
119#include <iprt/asm-mem.h>
120#include <iprt/string.h>
121#include <VBox/dbg.h>
122
123
124/*********************************************************************************************************************************
125* Structures and Typedefs *
126*********************************************************************************************************************************/
127typedef struct PGMPOOLCHECKERSTATE
128{
129 PDBGCCMDHLP pCmdHlp;
130 PVM pVM;
131 PPGMPOOL pPool;
132 PPGMPOOLPAGE pPage;
133 bool fFirstMsg;
134 uint32_t cErrors;
135} PGMPOOLCHECKERSTATE;
136typedef PGMPOOLCHECKERSTATE *PPGMPOOLCHECKERSTATE;
137
138
139
140/*********************************************************************************************************************************
141* Internal Functions *
142*********************************************************************************************************************************/
143static FNDBGFHANDLERINT pgmR3PoolInfoPages;
144static FNDBGFHANDLERINT pgmR3PoolInfoRoots;
145
146#ifdef VBOX_WITH_DEBUGGER
147static FNDBGCCMD pgmR3PoolCmdCheck;
148
149/** Command descriptors. */
150static const DBGCCMD g_aCmds[] =
151{
152 /* pszCmd, cArgsMin, cArgsMax, paArgDesc, cArgDescs, fFlags, pfnHandler, pszSyntax, pszDescription */
153 { "pgmpoolcheck", 0, 0, NULL, 0, 0, pgmR3PoolCmdCheck, "", "Check the pgm pool pages." },
154};
155#endif
156
157/**
158 * Initializes the pool.
159 *
160 * @returns VBox status code.
161 * @param pVM The cross context VM structure.
162 */
163int pgmR3PoolInit(PVM pVM)
164{
165 int rc;
166
167 AssertCompile(NIL_PGMPOOL_IDX == 0);
168 /* pPage->cLocked is an unsigned byte. */
169 AssertCompile(VMM_MAX_CPU_COUNT <= 255);
170
171 /*
172 * Query Pool config.
173 */
174 PCFGMNODE pCfg = CFGMR3GetChild(CFGMR3GetRoot(pVM), "/PGM/Pool");
175
176 /* Default pgm pool size is 1024 pages (4MB). */
177 uint16_t cMaxPages = 1024;
178
179 /* Adjust it up relative to the RAM size, using the nested paging formula. */
180 uint64_t cbRam;
181 rc = CFGMR3QueryU64Def(CFGMR3GetRoot(pVM), "RamSize", &cbRam, 0); AssertRCReturn(rc, rc);
182 /** @todo guest x86 specific */
183 uint64_t u64MaxPages = (cbRam >> 9)
184 + (cbRam >> 18)
185 + (cbRam >> 27)
186 + 32 * GUEST_PAGE_SIZE;
187 u64MaxPages >>= GUEST_PAGE_SHIFT;
188 if (u64MaxPages > PGMPOOL_IDX_LAST)
189 cMaxPages = PGMPOOL_IDX_LAST;
190 else
191 cMaxPages = (uint16_t)u64MaxPages;
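    /* Worked example (illustrative, not from the source): for cbRam = 1 GiB
       and 4 KiB guest pages, the formula above yields 2 MiB + 4 KiB + 8 bytes
       + 128 KiB = 2232328 bytes of shadow paging structures, which comes to
       u64MaxPages = 545 after the GUEST_PAGE_SHIFT division. */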
192
193 /** @cfgm{/PGM/Pool/MaxPages, uint16_t, \#pages, 16, 0x3fff, F(ram-size)}
194 * The max size of the shadow page pool in pages. The pool will grow dynamically
195 * up to this limit.
196 */
197 rc = CFGMR3QueryU16Def(pCfg, "MaxPages", &cMaxPages, cMaxPages);
198 AssertLogRelRCReturn(rc, rc);
199 AssertLogRelMsgReturn(cMaxPages <= PGMPOOL_IDX_LAST && cMaxPages >= RT_ALIGN(PGMPOOL_IDX_FIRST, 16),
200 ("cMaxPages=%u (%#x)\n", cMaxPages, cMaxPages), VERR_INVALID_PARAMETER);
201 AssertCompile(RT_IS_POWER_OF_TWO(PGMPOOL_CFG_MAX_GROW));
202 if (cMaxPages < PGMPOOL_IDX_LAST)
203 cMaxPages = RT_ALIGN(cMaxPages, PGMPOOL_CFG_MAX_GROW / 2);
204 if (cMaxPages > PGMPOOL_IDX_LAST)
205 cMaxPages = PGMPOOL_IDX_LAST;
206 LogRel(("PGM: PGMPool: cMaxPages=%u (u64MaxPages=%llu)\n", cMaxPages, u64MaxPages));
207
208 /** @todo
209 * We need to be much more careful with our allocation strategy here.
210 * For nested paging we don't need pool user info nor extents at all, but
211 * we can't check for nested paging here (too early during init to get a
212 * confirmation it can be used). The default for large memory configs is a
213 * bit large for shadow paging, so I've restricted the extent maximum to 8k
214 * (8k * 16 = 128k of hyper heap).
215 *
216 * Also when large page support is enabled, we typically don't need so much,
217 * although that depends on the availability of 2 MB chunks on the host.
218 */
219
220 /** @cfgm{/PGM/Pool/MaxUsers, uint16_t, \#users, MaxUsers, 32K, MaxPages*2}
221 * The max number of shadow page user tracking records. Each shadow page has
222 * zero or more other shadow pages (or CR3s) that reference it, or use it if
223 * you like. The structures describing these relationships are allocated from a
224 * fixed sized pool. This configuration variable defines the pool size.
225 */
226 uint16_t cMaxUsers;
227 rc = CFGMR3QueryU16Def(pCfg, "MaxUsers", &cMaxUsers, cMaxPages * 2);
228 AssertLogRelRCReturn(rc, rc);
229 AssertLogRelMsgReturn(cMaxUsers >= cMaxPages && cMaxUsers <= _32K,
230 ("cMaxUsers=%u (%#x)\n", cMaxUsers, cMaxUsers), VERR_INVALID_PARAMETER);
231
232 /** @cfgm{/PGM/Pool/MaxPhysExts, uint16_t, \#extents, 16, MaxPages * 2, MIN(MaxPages*2\,8192)}
233 * The max number of extents for tracking aliased guest pages.
234 */
235 uint16_t cMaxPhysExts;
236 rc = CFGMR3QueryU16Def(pCfg, "MaxPhysExts", &cMaxPhysExts,
237 RT_MIN(cMaxPages * 2, 8192 /* 8Ki max as this eats too much hyper heap */));
238 AssertLogRelRCReturn(rc, rc);
239 AssertLogRelMsgReturn(cMaxPhysExts >= 16 && cMaxPhysExts <= PGMPOOL_IDX_LAST,
240 ("cMaxPhysExts=%u (%#x)\n", cMaxPhysExts, cMaxPhysExts), VERR_INVALID_PARAMETER);
241
242 /** @cfgm{/PGM/Pool/CacheEnabled, bool, true}
243 * Enables or disables caching of shadow pages. Caching means that we will try
244 * to reuse shadow pages instead of recreating them every time SyncCR3, SyncPT
245 * or SyncPage requests one. When reusing a shadow page, we can save time
246 * reconstructing it and its children.
247 */
248 bool fCacheEnabled;
249 rc = CFGMR3QueryBoolDef(pCfg, "CacheEnabled", &fCacheEnabled, true);
250 AssertLogRelRCReturn(rc, rc);
251
252 LogRel(("PGM: pgmR3PoolInit: cMaxPages=%#RX16 cMaxUsers=%#RX16 cMaxPhysExts=%#RX16 fCacheEnabled=%RTbool\n",
253 cMaxPages, cMaxUsers, cMaxPhysExts, fCacheEnabled));
254
255 /*
256 * Allocate the data structures.
257 */
258 uint32_t cb = RT_UOFFSETOF_DYN(PGMPOOL, aPages[cMaxPages]);
259 cb += cMaxUsers * sizeof(PGMPOOLUSER);
260 cb += cMaxPhysExts * sizeof(PGMPOOLPHYSEXT);
261 PPGMPOOL pPool;
262 RTR0PTR pPoolR0;
263 rc = SUPR3PageAllocEx(RT_ALIGN_32(cb, HOST_PAGE_SIZE) >> HOST_PAGE_SHIFT, 0 /*fFlags*/, (void **)&pPool, &pPoolR0, NULL);
264 if (RT_FAILURE(rc))
265 return rc;
266 Assert(ASMMemIsZero(pPool, cb));
267 pVM->pgm.s.pPoolR3 = pPool->pPoolR3 = pPool;
268 pVM->pgm.s.pPoolR0 = pPool->pPoolR0 = pPoolR0;
269
270 /*
271 * Initialize it.
272 */
273 pPool->pVMR3 = pVM;
274 pPool->pVMR0 = pVM->pVMR0ForCall;
275 pPool->cMaxPages = cMaxPages;
276 pPool->cCurPages = PGMPOOL_IDX_FIRST;
277 pPool->iUserFreeHead = 0;
278 pPool->cMaxUsers = cMaxUsers;
279 PPGMPOOLUSER paUsers = (PPGMPOOLUSER)&pPool->aPages[pPool->cMaxPages];
280 pPool->paUsersR3 = paUsers;
281 pPool->paUsersR0 = pPoolR0 + (uintptr_t)paUsers - (uintptr_t)pPool;
282 for (unsigned i = 0; i < cMaxUsers; i++)
283 {
284 paUsers[i].iNext = i + 1;
285 paUsers[i].iUser = NIL_PGMPOOL_IDX;
286 paUsers[i].iUserTable = 0xfffffffe;
287 }
288 paUsers[cMaxUsers - 1].iNext = NIL_PGMPOOL_USER_INDEX;
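    /* The records above form an index-chained free list; allocating one is a
       simple head pop (a sketch only, the real allocator lives in the shared
       pool code):
           uint16_t i = pPool->iUserFreeHead;
           pPool->iUserFreeHead = paUsers[i].iNext; */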
289 pPool->iPhysExtFreeHead = 0;
290 pPool->cMaxPhysExts = cMaxPhysExts;
291 PPGMPOOLPHYSEXT paPhysExts = (PPGMPOOLPHYSEXT)&paUsers[cMaxUsers];
292 pPool->paPhysExtsR3 = paPhysExts;
293 pPool->paPhysExtsR0 = pPoolR0 + (uintptr_t)paPhysExts - (uintptr_t)pPool;
294 for (unsigned i = 0; i < cMaxPhysExts; i++)
295 {
296 paPhysExts[i].iNext = i + 1;
297 paPhysExts[i].aidx[0] = NIL_PGMPOOL_IDX;
298 paPhysExts[i].apte[0] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
299 paPhysExts[i].aidx[1] = NIL_PGMPOOL_IDX;
300 paPhysExts[i].apte[1] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
301 paPhysExts[i].aidx[2] = NIL_PGMPOOL_IDX;
302 paPhysExts[i].apte[2] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
303 }
304 paPhysExts[cMaxPhysExts - 1].iNext = NIL_PGMPOOL_PHYSEXT_INDEX;
305 for (unsigned i = 0; i < RT_ELEMENTS(pPool->aiHash); i++)
306 pPool->aiHash[i] = NIL_PGMPOOL_IDX;
307 pPool->iAgeHead = NIL_PGMPOOL_IDX;
308 pPool->iAgeTail = NIL_PGMPOOL_IDX;
309 pPool->fCacheEnabled = fCacheEnabled;
310
311 pPool->hAccessHandlerType = NIL_PGMPHYSHANDLERTYPE;
312 rc = PGMR3HandlerPhysicalTypeRegister(pVM, PGMPHYSHANDLERKIND_WRITE, PGMPHYSHANDLER_F_KEEP_PGM_LOCK,
313 pgmPoolAccessHandler, "Guest Paging Access Handler", &pPool->hAccessHandlerType);
314 AssertLogRelRCReturn(rc, rc);
315
316 pPool->HCPhysTree = 0;
317
318 /*
319 * The NIL entry.
320 */
321 Assert(NIL_PGMPOOL_IDX == 0);
322 pPool->aPages[NIL_PGMPOOL_IDX].enmKind = PGMPOOLKIND_INVALID;
323 pPool->aPages[NIL_PGMPOOL_IDX].idx = NIL_PGMPOOL_IDX;
324 pPool->aPages[NIL_PGMPOOL_IDX].Core.Key = NIL_RTHCPHYS;
325 pPool->aPages[NIL_PGMPOOL_IDX].GCPhys = NIL_RTGCPHYS;
326 pPool->aPages[NIL_PGMPOOL_IDX].iNext = NIL_PGMPOOL_IDX;
327 /* pPool->aPages[NIL_PGMPOOL_IDX].cLocked = INT32_MAX; - test this out... */
328 pPool->aPages[NIL_PGMPOOL_IDX].pvPageR3 = 0;
329 pPool->aPages[NIL_PGMPOOL_IDX].iUserHead = NIL_PGMPOOL_USER_INDEX;
330 pPool->aPages[NIL_PGMPOOL_IDX].iModifiedNext = NIL_PGMPOOL_IDX;
331 pPool->aPages[NIL_PGMPOOL_IDX].iModifiedPrev = NIL_PGMPOOL_IDX;
332 pPool->aPages[NIL_PGMPOOL_IDX].iMonitoredNext = NIL_PGMPOOL_IDX;
333 pPool->aPages[NIL_PGMPOOL_IDX].iMonitoredPrev = NIL_PGMPOOL_IDX;
334 pPool->aPages[NIL_PGMPOOL_IDX].iAgeNext = NIL_PGMPOOL_IDX;
335 pPool->aPages[NIL_PGMPOOL_IDX].iAgePrev = NIL_PGMPOOL_IDX;
336
337 Assert(pPool->aPages[NIL_PGMPOOL_IDX].idx == NIL_PGMPOOL_IDX);
338 Assert(pPool->aPages[NIL_PGMPOOL_IDX].GCPhys == NIL_RTGCPHYS);
339 Assert(!pPool->aPages[NIL_PGMPOOL_IDX].fSeenNonGlobal);
340 Assert(!pPool->aPages[NIL_PGMPOOL_IDX].fMonitored);
341 Assert(!pPool->aPages[NIL_PGMPOOL_IDX].fCached);
342 Assert(!pPool->aPages[NIL_PGMPOOL_IDX].fZeroed);
343 Assert(!pPool->aPages[NIL_PGMPOOL_IDX].fReusedFlushPending);
344
345 /*
346 * Register statistics.
347 */
348 STAM_REL_REG(pVM, &pPool->StatGrow, STAMTYPE_PROFILE, "/PGM/Pool/Grow", STAMUNIT_TICKS_PER_CALL, "Profiling PGMR0PoolGrow");
349#ifdef VBOX_WITH_STATISTICS
350 STAM_REG(pVM, &pPool->cCurPages, STAMTYPE_U16, "/PGM/Pool/cCurPages", STAMUNIT_PAGES, "Current pool size.");
351 STAM_REG(pVM, &pPool->cMaxPages, STAMTYPE_U16, "/PGM/Pool/cMaxPages", STAMUNIT_PAGES, "Max pool size.");
352 STAM_REG(pVM, &pPool->cUsedPages, STAMTYPE_U16, "/PGM/Pool/cUsedPages", STAMUNIT_PAGES, "The number of pages currently in use.");
353 STAM_REG(pVM, &pPool->cUsedPagesHigh, STAMTYPE_U16_RESET, "/PGM/Pool/cUsedPagesHigh", STAMUNIT_PAGES, "The high watermark for cUsedPages.");
354 STAM_REG(pVM, &pPool->StatAlloc, STAMTYPE_PROFILE_ADV, "/PGM/Pool/Alloc", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolAlloc.");
355 STAM_REG(pVM, &pPool->StatClearAll, STAMTYPE_PROFILE, "/PGM/Pool/ClearAll", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmR3PoolClearAll.");
356 STAM_REG(pVM, &pPool->StatR3Reset, STAMTYPE_PROFILE, "/PGM/Pool/R3Reset", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmR3PoolReset.");
357 STAM_REG(pVM, &pPool->StatFlushPage, STAMTYPE_PROFILE, "/PGM/Pool/FlushPage", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolFlushPage.");
358 STAM_REG(pVM, &pPool->StatFree, STAMTYPE_PROFILE, "/PGM/Pool/Free", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolFree.");
359 STAM_REG(pVM, &pPool->StatForceFlushPage, STAMTYPE_COUNTER, "/PGM/Pool/FlushForce", STAMUNIT_OCCURENCES, "Counting explicit flushes by PGMPoolFlushPage().");
360 STAM_REG(pVM, &pPool->StatForceFlushDirtyPage, STAMTYPE_COUNTER, "/PGM/Pool/FlushForceDirty", STAMUNIT_OCCURENCES, "Counting explicit flushes of dirty pages by PGMPoolFlushPage().");
361 STAM_REG(pVM, &pPool->StatForceFlushReused, STAMTYPE_COUNTER, "/PGM/Pool/FlushReused", STAMUNIT_OCCURENCES, "Counting flushes for reused pages.");
362 STAM_REG(pVM, &pPool->StatZeroPage, STAMTYPE_PROFILE, "/PGM/Pool/ZeroPage", STAMUNIT_TICKS_PER_CALL, "Profiling time spent zeroing pages. Overlaps with Alloc.");
363 STAM_REG(pVM, &pPool->cMaxUsers, STAMTYPE_U16, "/PGM/Pool/Track/cMaxUsers", STAMUNIT_COUNT, "Max user tracking records.");
364 STAM_REG(pVM, &pPool->cPresent, STAMTYPE_U32, "/PGM/Pool/Track/cPresent", STAMUNIT_COUNT, "Number of present page table entries.");
365 STAM_REG(pVM, &pPool->StatTrackDeref, STAMTYPE_PROFILE, "/PGM/Pool/Track/Deref", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolTrackDeref.");
366 STAM_REG(pVM, &pPool->StatTrackFlushGCPhysPT, STAMTYPE_PROFILE, "/PGM/Pool/Track/FlushGCPhysPT", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolTrackFlushGCPhysPT.");
367 STAM_REG(pVM, &pPool->StatTrackFlushGCPhysPTs, STAMTYPE_PROFILE, "/PGM/Pool/Track/FlushGCPhysPTs", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolTrackFlushGCPhysPTs.");
368 STAM_REG(pVM, &pPool->StatTrackFlushGCPhysPTsSlow, STAMTYPE_PROFILE, "/PGM/Pool/Track/FlushGCPhysPTsSlow", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmPoolTrackFlushGCPhysPTsSlow.");
369 STAM_REG(pVM, &pPool->StatTrackFlushEntry, STAMTYPE_COUNTER, "/PGM/Pool/Track/Entry/Flush", STAMUNIT_COUNT, "Nr of flushed entries.");
370 STAM_REG(pVM, &pPool->StatTrackFlushEntryKeep, STAMTYPE_COUNTER, "/PGM/Pool/Track/Entry/Update", STAMUNIT_COUNT, "Nr of updated entries.");
371 STAM_REG(pVM, &pPool->StatTrackFreeUpOneUser, STAMTYPE_COUNTER, "/PGM/Pool/Track/FreeUpOneUser", STAMUNIT_TICKS_PER_CALL, "The number of times we were out of user tracking records.");
372 STAM_REG(pVM, &pPool->StatTrackDerefGCPhys, STAMTYPE_PROFILE, "/PGM/Pool/Track/DrefGCPhys", STAMUNIT_TICKS_PER_CALL, "Profiling deref activity related to tracking GC physical pages.");
373 STAM_REG(pVM, &pPool->StatTrackLinearRamSearches, STAMTYPE_COUNTER, "/PGM/Pool/Track/LinearRamSearches", STAMUNIT_OCCURENCES, "The number of times we had to do linear ram searches.");
374 STAM_REG(pVM, &pPool->StamTrackPhysExtAllocFailures,STAMTYPE_COUNTER, "/PGM/Pool/Track/PhysExtAllocFailures", STAMUNIT_OCCURENCES, "The number of failing pgmPoolTrackPhysExtAlloc calls.");
375
376 STAM_REG(pVM, &pPool->StatMonitorPfRZ, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/#PF", STAMUNIT_TICKS_PER_CALL, "Profiling the RC/R0 #PF access handler.");
377 STAM_REG(pVM, &pPool->StatMonitorPfRZEmulateInstr, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/EmulateInstr", STAMUNIT_OCCURENCES, "Times we've failed interpreting the instruction.");
378 STAM_REG(pVM, &pPool->StatMonitorPfRZFlushPage, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/#PF/FlushPage", STAMUNIT_TICKS_PER_CALL, "Profiling the pgmPoolFlushPage calls made from the RC/R0 access handler.");
379 STAM_REG(pVM, &pPool->StatMonitorPfRZFlushReinit, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/FlushReinit", STAMUNIT_OCCURENCES, "Times we've detected a page table reinit.");
380 STAM_REG(pVM, &pPool->StatMonitorPfRZFlushModOverflow,STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/FlushOverflow", STAMUNIT_OCCURENCES, "Counting flushes for pages that are modified too often.");
381 STAM_REG(pVM, &pPool->StatMonitorPfRZFork, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/Fork", STAMUNIT_OCCURENCES, "Times we've detected fork().");
382 STAM_REG(pVM, &pPool->StatMonitorPfRZHandled, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/#PF/Handled", STAMUNIT_TICKS_PER_CALL, "Profiling the RC/R0 #PF access we've handled (except REP STOSD).");
383 STAM_REG(pVM, &pPool->StatMonitorPfRZIntrFailPatch1, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/IntrFailPatch1", STAMUNIT_OCCURENCES, "Times we've failed interpreting a patch code instruction.");
384 STAM_REG(pVM, &pPool->StatMonitorPfRZIntrFailPatch2, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/IntrFailPatch2", STAMUNIT_OCCURENCES, "Times we've failed interpreting a patch code instruction during flushing.");
385 STAM_REG(pVM, &pPool->StatMonitorPfRZRepPrefix, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/#PF/RepPrefix", STAMUNIT_OCCURENCES, "The number of times we've seen rep prefixes we can't handle.");
386 STAM_REG(pVM, &pPool->StatMonitorPfRZRepStosd, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/#PF/RepStosd", STAMUNIT_TICKS_PER_CALL, "Profiling the REP STOSD cases we've handled.");
387
388 STAM_REG(pVM, &pPool->StatMonitorRZ, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/IEM", STAMUNIT_TICKS_PER_CALL, "Profiling the regular access handler.");
389 STAM_REG(pVM, &pPool->StatMonitorRZFlushPage, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/RZ/IEM/FlushPage", STAMUNIT_TICKS_PER_CALL, "Profiling the pgmPoolFlushPage calls made from the regular access handler.");
390 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[0], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size01", STAMUNIT_OCCURENCES, "Number of 1 byte accesses.");
391 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[1], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size02", STAMUNIT_OCCURENCES, "Number of 2 byte accesses.");
392 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[2], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size03", STAMUNIT_OCCURENCES, "Number of 3 byte accesses.");
393 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[3], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size04", STAMUNIT_OCCURENCES, "Number of 4 byte accesses.");
394 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[4], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size05", STAMUNIT_OCCURENCES, "Number of 5 byte accesses.");
395 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[5], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size06", STAMUNIT_OCCURENCES, "Number of 6 byte accesses.");
396 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[6], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size07", STAMUNIT_OCCURENCES, "Number of 7 byte accesses.");
397 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[7], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size08", STAMUNIT_OCCURENCES, "Number of 8 byte accesses.");
398 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[8], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size09", STAMUNIT_OCCURENCES, "Number of 9 byte accesses.");
399 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[9], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0a", STAMUNIT_OCCURENCES, "Number of 10 byte accesses.");
400 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[10], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0b", STAMUNIT_OCCURENCES, "Number of 11 byte accesses.");
401 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[11], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0c", STAMUNIT_OCCURENCES, "Number of 12 byte accesses.");
402 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[12], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0d", STAMUNIT_OCCURENCES, "Number of 13 byte accesses.");
403 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[13], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0e", STAMUNIT_OCCURENCES, "Number of 14 byte accesses.");
404 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[14], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size0f", STAMUNIT_OCCURENCES, "Number of 15 byte accesses.");
405 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[15], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size10", STAMUNIT_OCCURENCES, "Number of 16 byte accesses.");
406 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[16], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size11-2f", STAMUNIT_OCCURENCES, "Number of 17-31 byte accesses.");
407 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[17], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size20-3f", STAMUNIT_OCCURENCES, "Number of 32-63 byte accesses.");
408 STAM_REG(pVM, &pPool->aStatMonitorRZSizes[18], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Size40+", STAMUNIT_OCCURENCES, "Number of 64+ byte accesses.");
409 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[0], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned1", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 1.");
410 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[1], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned2", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 2.");
411 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[2], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned3", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 3.");
412 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[3], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned4", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 4.");
413 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[4], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned5", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 5.");
414 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[5], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned6", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 6.");
415 STAM_REG(pVM, &pPool->aStatMonitorRZMisaligned[6], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/IEM/Misaligned7", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 7.");
416
417 STAM_REG(pVM, &pPool->StatMonitorRZFaultPT, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/Fault/PT", STAMUNIT_OCCURENCES, "Nr of handled PT faults.");
418 STAM_REG(pVM, &pPool->StatMonitorRZFaultPD, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/Fault/PD", STAMUNIT_OCCURENCES, "Nr of handled PD faults.");
419 STAM_REG(pVM, &pPool->StatMonitorRZFaultPDPT, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/Fault/PDPT", STAMUNIT_OCCURENCES, "Nr of handled PDPT faults.");
420 STAM_REG(pVM, &pPool->StatMonitorRZFaultPML4, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/RZ/Fault/PML4", STAMUNIT_OCCURENCES, "Nr of handled PML4 faults.");
421
422 STAM_REG(pVM, &pPool->StatMonitorR3, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/R3", STAMUNIT_TICKS_PER_CALL, "Profiling the R3 access handler.");
423 STAM_REG(pVM, &pPool->StatMonitorR3FlushPage, STAMTYPE_PROFILE, "/PGM/Pool/Monitor/R3/FlushPage", STAMUNIT_TICKS_PER_CALL, "Profiling the pgmPoolFlushPage calls made from the R3 access handler.");
424 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[0], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size01", STAMUNIT_OCCURENCES, "Number of 1 byte accesses (R3).");
425 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[1], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size02", STAMUNIT_OCCURENCES, "Number of 2 byte accesses (R3).");
426 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[2], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size03", STAMUNIT_OCCURENCES, "Number of 3 byte accesses (R3).");
427 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[3], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size04", STAMUNIT_OCCURENCES, "Number of 4 byte accesses (R3).");
428 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[4], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size05", STAMUNIT_OCCURENCES, "Number of 5 byte accesses (R3).");
429 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[5], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size06", STAMUNIT_OCCURENCES, "Number of 6 byte accesses (R3).");
430 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[6], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size07", STAMUNIT_OCCURENCES, "Number of 7 byte accesses (R3).");
431 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[7], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size08", STAMUNIT_OCCURENCES, "Number of 8 byte accesses (R3).");
432 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[8], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size09", STAMUNIT_OCCURENCES, "Number of 9 byte accesses (R3).");
433 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[9], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0a", STAMUNIT_OCCURENCES, "Number of 10 byte accesses (R3).");
434 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[10], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0b", STAMUNIT_OCCURENCES, "Number of 11 byte accesses (R3).");
435 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[11], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0c", STAMUNIT_OCCURENCES, "Number of 12 byte accesses (R3).");
436 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[12], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0d", STAMUNIT_OCCURENCES, "Number of 13 byte accesses (R3).");
437 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[13], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0e", STAMUNIT_OCCURENCES, "Number of 14 byte accesses (R3).");
438 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[14], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size0f", STAMUNIT_OCCURENCES, "Number of 15 byte accesses (R3).");
439 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[15], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size10", STAMUNIT_OCCURENCES, "Number of 16 byte accesses (R3).");
440 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[16], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size11-2f", STAMUNIT_OCCURENCES, "Number of 17-31 byte accesses.");
441 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[17], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size20-3f", STAMUNIT_OCCURENCES, "Number of 32-63 byte accesses.");
442 STAM_REG(pVM, &pPool->aStatMonitorR3Sizes[18], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Size40+", STAMUNIT_OCCURENCES, "Number of 64+ byte accesses.");
443 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[0], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned1", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 1 in R3.");
444 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[1], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned2", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 2 in R3.");
445 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[2], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned3", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 3 in R3.");
446 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[3], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned4", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 4 in R3.");
447 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[4], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned5", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 5 in R3.");
448 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[5], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned6", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 6 in R3.");
449 STAM_REG(pVM, &pPool->aStatMonitorR3Misaligned[6], STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Misaligned7", STAMUNIT_OCCURENCES, "Number of misaligned access with offset 7 in R3.");
450
451 STAM_REG(pVM, &pPool->StatMonitorR3FaultPT, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Fault/PT", STAMUNIT_OCCURENCES, "Nr of handled PT faults.");
452 STAM_REG(pVM, &pPool->StatMonitorR3FaultPD, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Fault/PD", STAMUNIT_OCCURENCES, "Nr of handled PD faults.");
453 STAM_REG(pVM, &pPool->StatMonitorR3FaultPDPT, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Fault/PDPT", STAMUNIT_OCCURENCES, "Nr of handled PDPT faults.");
454 STAM_REG(pVM, &pPool->StatMonitorR3FaultPML4, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/R3/Fault/PML4", STAMUNIT_OCCURENCES, "Nr of handled PML4 faults.");
455
456 STAM_REG(pVM, &pPool->cModifiedPages, STAMTYPE_U16, "/PGM/Pool/Monitor/cModifiedPages", STAMUNIT_PAGES, "The current cModifiedPages value.");
457 STAM_REG(pVM, &pPool->cModifiedPagesHigh, STAMTYPE_U16_RESET, "/PGM/Pool/Monitor/cModifiedPagesHigh", STAMUNIT_PAGES, "The high watermark for cModifiedPages.");
458 STAM_REG(pVM, &pPool->StatResetDirtyPages, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/Dirty/Resets", STAMUNIT_OCCURENCES, "Times we've called pgmPoolResetDirtyPages (and there were dirty pages).");
459 STAM_REG(pVM, &pPool->StatDirtyPage, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/Dirty/Pages", STAMUNIT_OCCURENCES, "Times we've called pgmPoolAddDirtyPage.");
460 STAM_REG(pVM, &pPool->StatDirtyPageDupFlush, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/Dirty/FlushDup", STAMUNIT_OCCURENCES, "Times we've had to flush duplicates for dirty page management.");
461 STAM_REG(pVM, &pPool->StatDirtyPageOverFlowFlush, STAMTYPE_COUNTER, "/PGM/Pool/Monitor/Dirty/FlushOverflow",STAMUNIT_OCCURENCES, "Times we've had to flush because of overflow.");
462 STAM_REG(pVM, &pPool->StatCacheHits, STAMTYPE_COUNTER, "/PGM/Pool/Cache/Hits", STAMUNIT_OCCURENCES, "The number of pgmPoolAlloc calls satisfied by the cache.");
463 STAM_REG(pVM, &pPool->StatCacheMisses, STAMTYPE_COUNTER, "/PGM/Pool/Cache/Misses", STAMUNIT_OCCURENCES, "The number of pgmPoolAlloc calls not satisfied by the cache.");
464 STAM_REG(pVM, &pPool->StatCacheKindMismatches, STAMTYPE_COUNTER, "/PGM/Pool/Cache/KindMismatches", STAMUNIT_OCCURENCES, "The number of shadow page kind mismatches. (Better be low, preferably 0!)");
465 STAM_REG(pVM, &pPool->StatCacheFreeUpOne, STAMTYPE_COUNTER, "/PGM/Pool/Cache/FreeUpOne", STAMUNIT_OCCURENCES, "The number of times the cache was asked to free up a page.");
466 STAM_REG(pVM, &pPool->StatCacheCacheable, STAMTYPE_COUNTER, "/PGM/Pool/Cache/Cacheable", STAMUNIT_OCCURENCES, "The number of cacheable allocations.");
467 STAM_REG(pVM, &pPool->StatCacheUncacheable, STAMTYPE_COUNTER, "/PGM/Pool/Cache/Uncacheable", STAMUNIT_OCCURENCES, "The number of uncacheable allocations.");
468#endif /* VBOX_WITH_STATISTICS */
469
470 DBGFR3InfoRegisterInternalEx(pVM, "pgmpoolpages", "Lists page pool pages.", pgmR3PoolInfoPages, 0);
471 DBGFR3InfoRegisterInternalEx(pVM, "pgmpoolroots", "Lists page pool roots.", pgmR3PoolInfoRoots, 0);
472
473#ifdef VBOX_WITH_DEBUGGER
474 /*
475 * Debugger commands.
476 */
477 static bool s_fRegisteredCmds = false;
478 if (!s_fRegisteredCmds)
479 {
480 rc = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
481 if (RT_SUCCESS(rc))
482 s_fRegisteredCmds = true;
483 }
484#endif
485
486 return VINF_SUCCESS;
487}
488
489
490/**
491 * Relocate the page pool data.
492 *
493 * @param pVM The cross context VM structure.
494 */
495void pgmR3PoolRelocate(PVM pVM)
496{
497 RT_NOREF(pVM);
498}
499
500
501/**
502 * Grows the shadow page pool.
503 *
504 * I.e. adds more pages to it, assuming it hasn't reached cMaxPages yet.
505 *
506 * @returns VBox status code.
507 * @param pVM The cross context VM structure.
508 * @param pVCpu The cross context virtual CPU structure of the calling EMT.
509 */
510VMMR3_INT_DECL(int) PGMR3PoolGrow(PVM pVM, PVMCPU pVCpu)
511{
512 /* This used to do a lot of stuff, but it has moved to ring-0 (PGMR0PoolGrow). */
513 AssertReturn(pVM->pgm.s.pPoolR3->cCurPages < pVM->pgm.s.pPoolR3->cMaxPages, VERR_PGM_POOL_MAXED_OUT_ALREADY);
514 int rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_PGM_POOL_GROW, 0, NULL);
515 if (rc == VINF_SUCCESS)
516 return rc;
517 LogRel(("PGMR3PoolGrow: rc=%Rrc cCurPages=%#x cMaxPages=%#x\n",
518 rc, pVM->pgm.s.pPoolR3->cCurPages, pVM->pgm.s.pPoolR3->cMaxPages));
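    /* Negating a VERR_* status turns it into a positive (informational) value,
       so the failure below is reported as non-fatal once the pool already
       holds more than 128 pages. */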
519 if (pVM->pgm.s.pPoolR3->cCurPages > 128 && RT_FAILURE_NP(rc))
520 return -rc;
521 return rc;
522}
523
524
525/**
526 * Rendezvous callback used by pgmR3PoolClearAll that clears all shadow pages
527 * and all modification counters.
528 *
529 * This is only called on one of the EMTs while the other ones are waiting for
530 * it to complete this function.
531 *
532 * @returns VINF_SUCCESS (VBox strict status code).
533 * @param pVM The cross context VM structure.
534 * @param pVCpu The cross context virtual CPU structure of the calling EMT. Unused.
535 * @param fpvFlushRemTlb When not NULL, we'll flush the REM TLB as well.
536 * (This is the pvUser, so it has to be void *.)
537 *
538 */
539DECLCALLBACK(VBOXSTRICTRC) pgmR3PoolClearAllRendezvous(PVM pVM, PVMCPU pVCpu, void *fpvFlushRemTlb)
540{
541 PPGMPOOL pPool = pVM->pgm.s.CTX_SUFF(pPool);
542 STAM_PROFILE_START(&pPool->StatClearAll, c);
543 NOREF(pVCpu);
544
545 PGM_LOCK_VOID(pVM);
546 Log(("pgmR3PoolClearAllRendezvous: cUsedPages=%d fpvFlushRemTlb=%RTbool\n", pPool->cUsedPages, !!fpvFlushRemTlb));
547
548 /*
549 * Iterate all the pages until we've encountered all that are in use.
550 * This is a simple but not quite optimal solution.
551 */
552 unsigned cModifiedPages = 0; NOREF(cModifiedPages);
553 unsigned cLeft = pPool->cUsedPages;
554 uint32_t iPage = pPool->cCurPages;
555 while (--iPage >= PGMPOOL_IDX_FIRST)
556 {
557 PPGMPOOLPAGE pPage = &pPool->aPages[iPage];
558 if (pPage->GCPhys != NIL_RTGCPHYS)
559 {
560 switch (pPage->enmKind)
561 {
562 /*
563 * We only care about shadow page tables that reference physical memory
564 */
565#ifdef PGM_WITH_LARGE_PAGES
566 case PGMPOOLKIND_PAE_PD_PHYS: /* Large pages reference 2 MB of physical memory, so we must clear them. */
567 if (pPage->cPresent)
568 {
569 PX86PDPAE pShwPD = (PX86PDPAE)PGMPOOL_PAGE_2_PTR_V2(pPool->CTX_SUFF(pVM), pVCpu, pPage);
570 for (unsigned i = 0; i < RT_ELEMENTS(pShwPD->a); i++)
571 {
572 //Assert((pShwPD->a[i].u & UINT64_C(0xfff0000000000f80)) == 0); - bogus, includes X86_PDE_PS.
573 if ((pShwPD->a[i].u & (X86_PDE_P | X86_PDE_PS)) == (X86_PDE_P | X86_PDE_PS))
574 {
575 pShwPD->a[i].u = 0;
576 Assert(pPage->cPresent);
577 pPage->cPresent--;
578 }
579 }
580 if (pPage->cPresent == 0)
581 pPage->iFirstPresent = NIL_PGMPOOL_PRESENT_INDEX;
582 }
583 goto default_case;
584
585 case PGMPOOLKIND_EPT_PD_FOR_PHYS: /* Large pages reference 2 MB of physical memory, so we must clear them. */
586 if (pPage->cPresent)
587 {
588 PEPTPD pShwPD = (PEPTPD)PGMPOOL_PAGE_2_PTR_V2(pPool->CTX_SUFF(pVM), pVCpu, pPage);
589 for (unsigned i = 0; i < RT_ELEMENTS(pShwPD->a); i++)
590 {
591 if ((pShwPD->a[i].u & (EPT_E_READ | EPT_E_LEAF)) == (EPT_E_READ | EPT_E_LEAF))
592 {
593 pShwPD->a[i].u = 0;
594 Assert(pPage->cPresent);
595 pPage->cPresent--;
596 }
597 }
598 if (pPage->cPresent == 0)
599 pPage->iFirstPresent = NIL_PGMPOOL_PRESENT_INDEX;
600 }
601 goto default_case;
602
603# ifdef VBOX_WITH_NESTED_HWVIRT_VMX_EPT
604 case PGMPOOLKIND_EPT_PD_FOR_EPT_PD: /* Large pages reference 2 MB of physical memory, so we must clear them. */
605 if (pPage->cPresent)
606 {
607 PEPTPD pShwPD = (PEPTPD)PGMPOOL_PAGE_2_PTR_V2(pPool->CTX_SUFF(pVM), pVCpu, pPage);
608 for (unsigned i = 0; i < RT_ELEMENTS(pShwPD->a); i++)
609 {
610 if ( (pShwPD->a[i].u & EPT_PRESENT_MASK)
611 && (pShwPD->a[i].u & EPT_E_LEAF))
612 {
613 pShwPD->a[i].u = 0;
614 Assert(pPage->cPresent);
615 pPage->cPresent--;
616 }
617 }
618 if (pPage->cPresent == 0)
619 pPage->iFirstPresent = NIL_PGMPOOL_PRESENT_INDEX;
620 }
621 goto default_case;
622# endif /* VBOX_WITH_NESTED_HWVIRT_VMX_EPT */
623#endif /* PGM_WITH_LARGE_PAGES */
624
625 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT:
626 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB:
627 case PGMPOOLKIND_PAE_PT_FOR_32BIT_PT:
628 case PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB:
629 case PGMPOOLKIND_PAE_PT_FOR_PAE_PT:
630 case PGMPOOLKIND_PAE_PT_FOR_PAE_2MB:
631 case PGMPOOLKIND_32BIT_PT_FOR_PHYS:
632 case PGMPOOLKIND_PAE_PT_FOR_PHYS:
633 case PGMPOOLKIND_EPT_PT_FOR_PHYS:
634#ifdef VBOX_WITH_NESTED_HWVIRT_VMX_EPT
635 case PGMPOOLKIND_EPT_PT_FOR_EPT_PT:
636 case PGMPOOLKIND_EPT_PT_FOR_EPT_2MB:
637 case PGMPOOLKIND_EPT_PDPT_FOR_EPT_PDPT:
638 case PGMPOOLKIND_EPT_PML4_FOR_EPT_PML4:
639#endif
640 {
641 if (pPage->cPresent)
642 {
643 void *pvShw = PGMPOOL_PAGE_2_PTR_V2(pPool->CTX_SUFF(pVM), pVCpu, pPage);
644 STAM_PROFILE_START(&pPool->StatZeroPage, z);
645#if 0
646 /* Useful check for leaking references; *very* expensive though. */
647 switch (pPage->enmKind)
648 {
649 case PGMPOOLKIND_PAE_PT_FOR_32BIT_PT:
650 case PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB:
651 case PGMPOOLKIND_PAE_PT_FOR_PAE_PT:
652 case PGMPOOLKIND_PAE_PT_FOR_PAE_2MB:
653 case PGMPOOLKIND_PAE_PT_FOR_PHYS:
654 {
655 bool fFoundFirst = false;
656 PPGMSHWPTPAE pPT = (PPGMSHWPTPAE)pvShw;
657 for (unsigned ptIndex = 0; ptIndex < RT_ELEMENTS(pPT->a); ptIndex++)
658 {
659 if (pPT->a[ptIndex].u)
660 {
661 if (!fFoundFirst)
662 {
663 AssertFatalMsg(pPage->iFirstPresent <= ptIndex, ("ptIndex = %d first present = %d\n", ptIndex, pPage->iFirstPresent));
664 if (pPage->iFirstPresent != ptIndex)
665 Log(("ptIndex = %d first present = %d\n", ptIndex, pPage->iFirstPresent));
666 fFoundFirst = true;
667 }
668 if (PGMSHWPTEPAE_IS_P(pPT->a[ptIndex]))
669 {
670 pgmPoolTracDerefGCPhysHint(pPool, pPage, PGMSHWPTEPAE_GET_HCPHYS(pPT->a[ptIndex]), NIL_RTGCPHYS);
671 if (pPage->iFirstPresent == ptIndex)
672 pPage->iFirstPresent = NIL_PGMPOOL_PRESENT_INDEX;
673 }
674 }
675 }
676 AssertFatalMsg(pPage->cPresent == 0, ("cPresent = %d pPage = %RGv\n", pPage->cPresent, pPage->GCPhys));
677 break;
678 }
679 default:
680 break;
681 }
682#endif
683 RT_BZERO(pvShw, PAGE_SIZE);
684 STAM_PROFILE_STOP(&pPool->StatZeroPage, z);
685 pPage->cPresent = 0;
686 pPage->iFirstPresent = NIL_PGMPOOL_PRESENT_INDEX;
687 }
688 }
689 RT_FALL_THRU();
690 default:
691#ifdef PGM_WITH_LARGE_PAGES
692 default_case:
693#endif
694 Assert(!pPage->cModifications || ++cModifiedPages);
695 Assert(pPage->iModifiedNext == NIL_PGMPOOL_IDX || pPage->cModifications);
696 Assert(pPage->iModifiedPrev == NIL_PGMPOOL_IDX || pPage->cModifications);
697 pPage->iModifiedNext = NIL_PGMPOOL_IDX;
698 pPage->iModifiedPrev = NIL_PGMPOOL_IDX;
699 pPage->cModifications = 0;
700 break;
701
702 }
703 if (!--cLeft)
704 break;
705 }
706 }
707
708#ifndef DEBUG_michael
709 AssertMsg(cModifiedPages == pPool->cModifiedPages, ("%d != %d\n", cModifiedPages, pPool->cModifiedPages));
710#endif
711 pPool->iModifiedHead = NIL_PGMPOOL_IDX;
712 pPool->cModifiedPages = 0;
713
714 /*
715 * Clear all the GCPhys links and rebuild the phys ext free list.
716 */
717 uint32_t const idRamRangeMax = RT_MIN(pVM->pgm.s.idRamRangeMax, RT_ELEMENTS(pVM->pgm.s.apRamRanges) - 1U);
718 Assert(pVM->pgm.s.apRamRanges[0] == NULL);
719 for (uint32_t idx = 1; idx <= idRamRangeMax; idx++)
720 {
721 PPGMRAMRANGE const pRam = pVM->pgm.s.apRamRanges[idx];
722 iPage = pRam->cb >> GUEST_PAGE_SHIFT;
723 while (iPage-- > 0)
724 PGM_PAGE_SET_TRACKING(pVM, &pRam->aPages[iPage], 0);
725 }
726
727 pPool->iPhysExtFreeHead = 0;
728 PPGMPOOLPHYSEXT paPhysExts = pPool->CTX_SUFF(paPhysExts);
729 const unsigned cMaxPhysExts = pPool->cMaxPhysExts;
730 for (unsigned i = 0; i < cMaxPhysExts; i++)
731 {
732 paPhysExts[i].iNext = i + 1;
733 paPhysExts[i].aidx[0] = NIL_PGMPOOL_IDX;
734 paPhysExts[i].apte[0] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
735 paPhysExts[i].aidx[1] = NIL_PGMPOOL_IDX;
736 paPhysExts[i].apte[1] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
737 paPhysExts[i].aidx[2] = NIL_PGMPOOL_IDX;
738 paPhysExts[i].apte[2] = NIL_PGMPOOL_PHYSEXT_IDX_PTE;
739 }
740 paPhysExts[cMaxPhysExts - 1].iNext = NIL_PGMPOOL_PHYSEXT_INDEX;
741
742
743#ifdef PGMPOOL_WITH_OPTIMIZED_DIRTY_PT
744 /* Reset all dirty pages to reactivate the page monitoring. */
745 /* Note: we must do this *after* clearing all page references and shadow page tables, as there might be stale references to
746 * recently removed MMIO ranges lying around that might otherwise end up asserting in pgmPoolTracDerefGCPhysHint.
747 */
748 for (unsigned i = 0; i < RT_ELEMENTS(pPool->aDirtyPages); i++)
749 {
750 unsigned idxPage = pPool->aidxDirtyPages[i];
751 if (idxPage == NIL_PGMPOOL_IDX)
752 continue;
753
754 PPGMPOOLPAGE pPage = &pPool->aPages[idxPage];
755 Assert(pPage->idx == idxPage);
756 Assert(pPage->iMonitoredNext == NIL_PGMPOOL_IDX && pPage->iMonitoredPrev == NIL_PGMPOOL_IDX);
757
758 AssertMsg(pPage->fDirty, ("Page %RGp (slot=%d) not marked dirty!", pPage->GCPhys, i));
759
760 Log(("Reactivate dirty page %RGp\n", pPage->GCPhys));
761
762 /* First write protect the page again to catch all write accesses. (before checking for changes -> SMP) */
763 int rc = PGMHandlerPhysicalReset(pVM, pPage->GCPhys & ~(RTGCPHYS)GUEST_PAGE_OFFSET_MASK);
764 AssertRCSuccess(rc);
765 pPage->fDirty = false;
766
767 pPool->aidxDirtyPages[i] = NIL_PGMPOOL_IDX;
768 }
769
770 /* Clear all dirty pages. */
771 pPool->idxFreeDirtyPage = 0;
772 pPool->cDirtyPages = 0;
773#endif
774
775 /* Clear the PGM_SYNC_CLEAR_PGM_POOL flag on all VCPUs to prevent redundant flushes. */
776 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
777 pVM->apCpusR3[idCpu]->pgm.s.fSyncFlags &= ~PGM_SYNC_CLEAR_PGM_POOL;
778
779 /* Flush job finished. */
780 VM_FF_CLEAR(pVM, VM_FF_PGM_POOL_FLUSH_PENDING);
781 pPool->cPresent = 0;
782 PGM_UNLOCK(pVM);
783
784 PGM_INVL_ALL_VCPU_TLBS(pVM);
785
786 if (fpvFlushRemTlb)
787 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
788 CPUMSetChangedFlags(pVM->apCpusR3[idCpu], CPUM_CHANGED_GLOBAL_TLB_FLUSH);
789
790 STAM_PROFILE_STOP(&pPool->StatClearAll, c);
791 return VINF_SUCCESS;
792}
793
794
795/**
796 * Clears the shadow page pool.
797 *
798 * @param pVM The cross context VM structure.
799 * @param fFlushRemTlb When set, the REM TLB is scheduled for flushing as
800 * well.
801 */
802void pgmR3PoolClearAll(PVM pVM, bool fFlushRemTlb)
803{
804 int rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ONCE, pgmR3PoolClearAllRendezvous, &fFlushRemTlb);
805 AssertRC(rc);
806}
807
808
809/**
810 * Stringifies a PGMPOOLACCESS value.
811 */
812static const char *pgmPoolPoolAccessToStr(uint8_t enmAccess)
813{
814 switch ((PGMPOOLACCESS)enmAccess)
815 {
816 case PGMPOOLACCESS_DONTCARE: return "DONTCARE";
817 case PGMPOOLACCESS_USER_RW: return "USER_RW";
818 case PGMPOOLACCESS_USER_R: return "USER_R";
819 case PGMPOOLACCESS_USER_RW_NX: return "USER_RW_NX";
820 case PGMPOOLACCESS_USER_R_NX: return "USER_R_NX";
821 case PGMPOOLACCESS_SUPERVISOR_RW: return "SUPERVISOR_RW";
822 case PGMPOOLACCESS_SUPERVISOR_R: return "SUPERVISOR_R";
823 case PGMPOOLACCESS_SUPERVISOR_RW_NX: return "SUPERVISOR_RW_NX";
824 case PGMPOOLACCESS_SUPERVISOR_R_NX: return "SUPERVISOR_R_NX";
825 }
826 return "Unknown Access";
827}
828
829
830/**
831 * Stringifies a PGMPOOLKIND value.
832 */
833static const char *pgmPoolPoolKindToStr(uint8_t enmKind)
834{
835 switch ((PGMPOOLKIND)enmKind)
836 {
837 case PGMPOOLKIND_INVALID:
838 return "INVALID";
839 case PGMPOOLKIND_FREE:
840 return "FREE";
841 case PGMPOOLKIND_32BIT_PT_FOR_PHYS:
842 return "32BIT_PT_FOR_PHYS";
843 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT:
844 return "32BIT_PT_FOR_32BIT_PT";
845 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB:
846 return "32BIT_PT_FOR_32BIT_4MB";
847 case PGMPOOLKIND_PAE_PT_FOR_PHYS:
848 return "PAE_PT_FOR_PHYS";
849 case PGMPOOLKIND_PAE_PT_FOR_32BIT_PT:
850 return "PAE_PT_FOR_32BIT_PT";
851 case PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB:
852 return "PAE_PT_FOR_32BIT_4MB";
853 case PGMPOOLKIND_PAE_PT_FOR_PAE_PT:
854 return "PAE_PT_FOR_PAE_PT";
855 case PGMPOOLKIND_PAE_PT_FOR_PAE_2MB:
856 return "PAE_PT_FOR_PAE_2MB";
857 case PGMPOOLKIND_32BIT_PD:
858 return "32BIT_PD";
859 case PGMPOOLKIND_32BIT_PD_PHYS:
860 return "32BIT_PD_PHYS";
861 case PGMPOOLKIND_PAE_PD0_FOR_32BIT_PD:
862 return "PAE_PD0_FOR_32BIT_PD";
863 case PGMPOOLKIND_PAE_PD1_FOR_32BIT_PD:
864 return "PAE_PD1_FOR_32BIT_PD";
865 case PGMPOOLKIND_PAE_PD2_FOR_32BIT_PD:
866 return "PAE_PD2_FOR_32BIT_PD";
867 case PGMPOOLKIND_PAE_PD3_FOR_32BIT_PD:
868 return "PAE_PD3_FOR_32BIT_PD";
869 case PGMPOOLKIND_PAE_PD_FOR_PAE_PD:
870 return "PAE_PD_FOR_PAE_PD";
871 case PGMPOOLKIND_PAE_PD_PHYS:
872 return "PAE_PD_PHYS";
873 case PGMPOOLKIND_PAE_PDPT_FOR_32BIT:
874 return "PAE_PDPT_FOR_32BIT";
875 case PGMPOOLKIND_PAE_PDPT:
876 return "PAE_PDPT";
877 case PGMPOOLKIND_PAE_PDPT_PHYS:
878 return "PAE_PDPT_PHYS";
879 case PGMPOOLKIND_64BIT_PDPT_FOR_64BIT_PDPT:
880 return "64BIT_PDPT_FOR_64BIT_PDPT";
881 case PGMPOOLKIND_64BIT_PDPT_FOR_PHYS:
882 return "64BIT_PDPT_FOR_PHYS";
883 case PGMPOOLKIND_64BIT_PD_FOR_64BIT_PD:
884 return "64BIT_PD_FOR_64BIT_PD";
885 case PGMPOOLKIND_64BIT_PD_FOR_PHYS:
886 return "64BIT_PD_FOR_PHYS";
887 case PGMPOOLKIND_64BIT_PML4:
888 return "64BIT_PML4";
889 case PGMPOOLKIND_EPT_PDPT_FOR_PHYS:
890 return "EPT_PDPT_FOR_PHYS";
891 case PGMPOOLKIND_EPT_PD_FOR_PHYS:
892 return "EPT_PD_FOR_PHYS";
893 case PGMPOOLKIND_EPT_PT_FOR_PHYS:
894 return "EPT_PT_FOR_PHYS";
895 case PGMPOOLKIND_ROOT_NESTED:
896 return "ROOT_NESTED";
897 case PGMPOOLKIND_EPT_PT_FOR_EPT_PT:
898 return "EPT_PT_FOR_EPT_PT";
899 case PGMPOOLKIND_EPT_PT_FOR_EPT_2MB:
900 return "EPT_PT_FOR_EPT_2MB";
901 case PGMPOOLKIND_EPT_PD_FOR_EPT_PD:
902 return "EPT_PD_FOR_EPT_PD";
903 case PGMPOOLKIND_EPT_PDPT_FOR_EPT_PDPT:
904 return "EPT_PDPT_FOR_EPT_PDPT";
905 case PGMPOOLKIND_EPT_PML4_FOR_EPT_PML4:
906 return "EPT_PML4_FOR_EPT_PML4";
907 }
908 return "Unknown kind!";
909}
910
911
912/**
913 * Protects all pgm pool page table entries to monitor writes.
914 *
915 * @param pVM The cross context VM structure.
916 *
917 * @remarks ASSUMES the caller will flush all TLBs!!
918 */
919void pgmR3PoolWriteProtectPages(PVM pVM)
920{
921 PGM_LOCK_ASSERT_OWNER(pVM);
922 PPGMPOOL pPool = pVM->pgm.s.CTX_SUFF(pPool);
923 unsigned cLeft = pPool->cUsedPages;
924 unsigned iPage = pPool->cCurPages;
925 while (--iPage >= PGMPOOL_IDX_FIRST)
926 {
927 PPGMPOOLPAGE pPage = &pPool->aPages[iPage];
928 if ( pPage->GCPhys != NIL_RTGCPHYS
929 && pPage->cPresent)
930 {
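    /* One mapping, three typed views: the shadow page is mapped once below and
       then interpreted according to pPage->enmKind. */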
931 union
932 {
933 void *pv;
934 PX86PT pPT;
935 PPGMSHWPTPAE pPTPae;
936 PEPTPT pPTEpt;
937 } uShw;
938 uShw.pv = PGMPOOL_PAGE_2_PTR(pVM, pPage);
939
940 switch (pPage->enmKind)
941 {
942 /*
943 * We only care about shadow page tables.
944 */
945 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT:
946 case PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB:
947 case PGMPOOLKIND_32BIT_PT_FOR_PHYS:
948 for (unsigned iShw = 0; iShw < RT_ELEMENTS(uShw.pPT->a); iShw++)
949 if (uShw.pPT->a[iShw].u & X86_PTE_P)
950 uShw.pPT->a[iShw].u &= ~(X86PGUINT)X86_PTE_RW;
951 break;
952
953 case PGMPOOLKIND_PAE_PT_FOR_32BIT_PT:
954 case PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB:
955 case PGMPOOLKIND_PAE_PT_FOR_PAE_PT:
956 case PGMPOOLKIND_PAE_PT_FOR_PAE_2MB:
957 case PGMPOOLKIND_PAE_PT_FOR_PHYS:
958 for (unsigned iShw = 0; iShw < RT_ELEMENTS(uShw.pPTPae->a); iShw++)
959 if (PGMSHWPTEPAE_IS_P(uShw.pPTPae->a[iShw]))
960 PGMSHWPTEPAE_SET_RO(uShw.pPTPae->a[iShw]);
961 break;
962
963 case PGMPOOLKIND_EPT_PT_FOR_PHYS:
964 for (unsigned iShw = 0; iShw < RT_ELEMENTS(uShw.pPTEpt->a); iShw++)
965 if (uShw.pPTEpt->a[iShw].u & EPT_E_READ)
966 uShw.pPTEpt->a[iShw].u &= ~(X86PGPAEUINT)EPT_E_WRITE;
967 break;
968
969 default:
970 break;
971 }
972 if (!--cLeft)
973 break;
974 }
975 }
976}
977
978
979/**
980 * @callback_method_impl{FNDBGFHANDLERINT, pgmpoolpages}
981 */
982static DECLCALLBACK(void) pgmR3PoolInfoPages(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
983{
984 RT_NOREF(pszArgs);
985
986 PPGMPOOL const pPool = pVM->pgm.s.CTX_SUFF(pPool);
987 unsigned const cPages = pPool->cCurPages;
988 unsigned cLeft = pPool->cUsedPages;
989 for (unsigned iPage = 0; iPage < cPages; iPage++)
990 {
991 PGMPOOLPAGE volatile const *pPage = (PGMPOOLPAGE volatile const *)&pPool->aPages[iPage];
992 RTGCPHYS const GCPhys = pPage->GCPhys;
993 uint8_t const enmKind = pPage->enmKind;
994 if ( enmKind != PGMPOOLKIND_INVALID
995 && enmKind != PGMPOOLKIND_FREE)
996 {
997 pHlp->pfnPrintf(pHlp, "#%04x: HCPhys=%RHp GCPhys=%RGp %s %s %s%s%s\n",
998 iPage, pPage->Core.Key, GCPhys, pPage->fA20Enabled ? "A20 " : "!A20",
999 pgmPoolPoolKindToStr(enmKind),
1000 pPage->enmAccess == PGMPOOLACCESS_DONTCARE ? "" : pgmPoolPoolAccessToStr(pPage->enmAccess),
1001 pPage->fCached ? " cached" : "", pPage->fMonitored ? " monitored" : "");
1002 if (!--cLeft)
1003 break;
1004 }
1005 }
1006}
1007
1008
1009/**
1010 * @callback_method_impl{FNDBGFHANDLERINT, pgmpoolroots}
1011 */
1012static DECLCALLBACK(void) pgmR3PoolInfoRoots(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
1013{
1014 RT_NOREF(pszArgs);
1015
1016 PPGMPOOL const pPool = pVM->pgm.s.CTX_SUFF(pPool);
1017 unsigned const cPages = pPool->cCurPages;
1018 unsigned cLeft = pPool->cUsedPages;
1019 for (unsigned iPage = 0; iPage < cPages; iPage++)
1020 {
1021 PGMPOOLPAGE volatile const *pPage = (PGMPOOLPAGE volatile const *)&pPool->aPages[iPage];
1022 RTGCPHYS const GCPhys = pPage->GCPhys;
1023 if (GCPhys != NIL_RTGCPHYS)
1024 {
1025 uint8_t const enmKind = pPage->enmKind;
1026 switch (enmKind)
1027 {
1028 default:
1029 break;
1030
1031 case PGMPOOLKIND_PAE_PDPT_FOR_32BIT:
1032 case PGMPOOLKIND_PAE_PDPT:
1033 case PGMPOOLKIND_PAE_PDPT_PHYS:
1034 case PGMPOOLKIND_64BIT_PML4:
1035 case PGMPOOLKIND_ROOT_NESTED:
1036 case PGMPOOLKIND_EPT_PML4_FOR_EPT_PML4:
1037 pHlp->pfnPrintf(pHlp, "#%04x: HCPhys=%RHp GCPhys=%RGp %s %s %s\n",
1038 iPage, pPage->Core.Key, GCPhys, pPage->fA20Enabled ? "A20 " : "!A20",
1039 pgmPoolPoolKindToStr(enmKind), pPage->fMonitored ? " monitored" : "");
1040 break;
1041 }
1042 if (!--cLeft)
1043 break;
1044 }
1045 }
1046}
1047
1048#ifdef VBOX_WITH_DEBUGGER
1049
1050/**
1051 * Helper for pgmR3PoolCmdCheck that reports an error.
1052 */
1053static void pgmR3PoolCheckError(PPGMPOOLCHECKERSTATE pState, const char *pszFormat, ...)
1054{
1055 if (pState->fFirstMsg)
1056 {
1057 DBGCCmdHlpPrintf(pState->pCmdHlp, "Checking pool page #%i for %RGp %s\n",
1058 pState->pPage->idx, pState->pPage->GCPhys, pgmPoolPoolKindToStr(pState->pPage->enmKind));
1059 pState->fFirstMsg = false;
1060 }
1061
1062 ++pState->cErrors;
1063
1064 va_list va;
1065 va_start(va, pszFormat);
1066 pState->pCmdHlp->pfnPrintfV(pState->pCmdHlp, NULL, pszFormat, va);
1067 va_end(va);
1068}
1069
1070
1071/**
1072 * @callback_method_impl{FNDBGCCMD, The '.pgmpoolcheck' command.}
1073 */
1074static DECLCALLBACK(int) pgmR3PoolCmdCheck(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
1075{
1076 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
1077 PVM pVM = pUVM->pVM;
1078 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
1079 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, -1, cArgs == 0);
1080 NOREF(paArgs);
1081
1082 PGM_LOCK_VOID(pVM);
1083 PPGMPOOL pPool = pVM->pgm.s.CTX_SUFF(pPool);
1084 PGMPOOLCHECKERSTATE State = { pCmdHlp, pVM, pPool, NULL, true, 0 };
1085 for (unsigned i = 0; i < pPool->cCurPages; i++)
1086 {
1087 PPGMPOOLPAGE pPage = &pPool->aPages[i];
1088 State.pPage = pPage;
1089 State.fFirstMsg = true;
1090
1091 if (pPage->idx != i)
1092 pgmR3PoolCheckError(&State, "Invalid idx value: %#x, expected %#x", pPage->idx, i);
1093
1094 if (pPage->enmKind == PGMPOOLKIND_FREE)
1095 continue;
1096 if (pPage->enmKind > PGMPOOLKIND_LAST || pPage->enmKind <= PGMPOOLKIND_INVALID)
1097 {
1098 if (pPage->enmKind != PGMPOOLKIND_INVALID || pPage->idx != 0)
1099 pgmR3PoolCheckError(&State, "Invalid enmKind value: %#x\n", pPage->enmKind);
1100 continue;
1101 }
1102
1103 void const *pvGuestPage = NULL;
1104 PGMPAGEMAPLOCK LockPage;
1105 if ( pPage->enmKind != PGMPOOLKIND_EPT_PDPT_FOR_PHYS
1106 && pPage->enmKind != PGMPOOLKIND_EPT_PD_FOR_PHYS
1107 && pPage->enmKind != PGMPOOLKIND_EPT_PT_FOR_PHYS
1108 && pPage->enmKind != PGMPOOLKIND_ROOT_NESTED)
1109 {
1110 int rc = PGMPhysGCPhys2CCPtrReadOnly(pVM, pPage->GCPhys, &pvGuestPage, &LockPage);
1111 if (RT_FAILURE(rc))
1112 {
1113 pgmR3PoolCheckError(&State, "PGMPhysGCPhys2CCPtrReadOnly failed for %RGp: %Rrc\n", pPage->GCPhys, rc);
1114 continue;
1115 }
1116 }
1117# define HCPHYS_TO_POOL_PAGE(a_HCPhys) (PPGMPOOLPAGE)RTAvloHCPhysGet(&pPool->HCPhysTree, (a_HCPhys))
1118
1119 /*
1120 * Check if something obvious is out of sync.
1121 */
1122 switch (pPage->enmKind)
1123 {
1124 case PGMPOOLKIND_PAE_PT_FOR_PAE_PT:
1125 {
                PCPGMSHWPTPAE const pShwPT = (PCPGMSHWPTPAE)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                PCX86PDPAE const    pGstPT = (PCX86PDPAE)pvGuestPage;
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPT->a); j++)
                    if (PGMSHWPTEPAE_IS_P(pShwPT->a[j]))
                    {
                        RTHCPHYS HCPhys = NIL_RTHCPHYS;
                        int rc = PGMPhysGCPhys2HCPhys(pPool->CTX_SUFF(pVM), pGstPT->a[j].u & X86_PTE_PAE_PG_MASK, &HCPhys);
                        if (   rc != VINF_SUCCESS
                            || PGMSHWPTEPAE_GET_HCPHYS(pShwPT->a[j]) != HCPhys)
                            pgmR3PoolCheckError(&State, "Mismatch HCPhys: rc=%Rrc idx=%#x guest %RX64 shw=%RX64 vs %RHp\n",
                                                rc, j, pGstPT->a[j].u, PGMSHWPTEPAE_GET_LOG(pShwPT->a[j]), HCPhys);
                        else if (   PGMSHWPTEPAE_IS_RW(pShwPT->a[j])
                                 && !(pGstPT->a[j].u & X86_PTE_RW))
                            pgmR3PoolCheckError(&State, "Mismatch r/w gst/shw: idx=%#x guest %RX64 shw=%RX64 vs %RHp\n",
                                                j, pGstPT->a[j].u, PGMSHWPTEPAE_GET_LOG(pShwPT->a[j]), HCPhys);
                    }
                break;
            }

            case PGMPOOLKIND_EPT_PT_FOR_EPT_PT:
            {
                PCEPTPT const pShwPT = (PCEPTPT)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                PCEPTPT const pGstPT = (PCEPTPT)pvGuestPage;
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPT->a); j++)
                {
                    uint64_t const uShw = pShwPT->a[j].u;
                    if (uShw & EPT_PRESENT_MASK)
                    {
                        uint64_t const uGst = pGstPT->a[j].u;
                        RTHCPHYS HCPhys = NIL_RTHCPHYS;
                        int rc = PGMPhysGCPhys2HCPhys(pPool->CTX_SUFF(pVM), uGst & EPT_E_PG_MASK, &HCPhys);
                        if (   rc != VINF_SUCCESS
                            || (uShw & EPT_E_PG_MASK) != HCPhys)
                            pgmR3PoolCheckError(&State, "Mismatch HCPhys: rc=%Rrc idx=%#x guest %RX64 shw=%RX64 vs %RHp\n",
                                                rc, j, uGst, uShw, HCPhys);
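                        /* Permission subset check (repeated at each EPT level
                           below): complain when the shadow entry is not fully
                           open (R+W+X all set) yet grants a right -- read,
                           write or execute -- that the guest entry withholds. */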
                        if (    (uShw & (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE))
                             != (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE)
                            && (   ((uShw & EPT_E_READ)    && !(uGst & EPT_E_READ))
                                || ((uShw & EPT_E_WRITE)   && !(uGst & EPT_E_WRITE))
                                || ((uShw & EPT_E_EXECUTE) && !(uGst & EPT_E_EXECUTE)) ) )
                            pgmR3PoolCheckError(&State, "Mismatch r/w/x: idx=%#x guest %RX64 shw=%RX64\n", j, uGst, uShw);
                    }
                }
                break;
            }

            case PGMPOOLKIND_EPT_PT_FOR_EPT_2MB:
            {
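                /* A guest 2MB EPT page is shadowed by a full 4KB page table,
                   so no shadow PTE here may itself be a leaf, and each present
                   PTE must map the host page backing the j-th 4KB slice
                   (pPage->GCPhys | (j << PAGE_SHIFT)) of the guest's 2MB
                   region. */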
                PCEPTPT const pShwPT = (PCEPTPT)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPT->a); j++)
                {
                    uint64_t const uShw = pShwPT->a[j].u;
                    if (uShw & EPT_E_LEAF)
                        pgmR3PoolCheckError(&State, "Leafness-error: idx=%#x shw=%RX64 (2MB)\n", j, uShw);
                    else if (uShw & EPT_PRESENT_MASK)
                    {
                        RTGCPHYS const GCPhysSubPage = pPage->GCPhys | (j << PAGE_SHIFT);
                        RTHCPHYS       HCPhys        = NIL_RTHCPHYS;
                        int rc = PGMPhysGCPhys2HCPhys(pPool->CTX_SUFF(pVM), GCPhysSubPage, &HCPhys);
                        if (   rc != VINF_SUCCESS
                            || (uShw & EPT_E_PG_MASK) != HCPhys)
                            pgmR3PoolCheckError(&State, "Mismatch HCPhys: rc=%Rrc idx=%#x guest %RX64 shw=%RX64 vs %RHp\n",
                                                rc, j, GCPhysSubPage, uShw, HCPhys);
                    }
                }
                break;
            }

            case PGMPOOLKIND_EPT_PD_FOR_EPT_PD:
            {
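                /* Per present PDE: leafness must agree with the guest, a 2MB
                   leaf must map the host page of the guest's 2MB frame, and a
                   non-leaf entry must reference a pool page of the right kind,
                   A20 state and GCPhys; the permission subset check applies in
                   both cases. */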
                PCEPTPD const pShwPD = (PCEPTPD)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                PCEPTPD const pGstPD = (PCEPTPD)pvGuestPage;
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPD->a); j++)
                {
                    uint64_t const uShw = pShwPD->a[j].u;
                    if (uShw & EPT_PRESENT_MASK)
                    {
                        uint64_t const uGst = pGstPD->a[j].u;
                        if (uShw & EPT_E_LEAF)
                        {
                            if (!(uGst & EPT_E_LEAF))
                                pgmR3PoolCheckError(&State, "Leafness-mismatch: idx=%#x guest %RX64 shw=%RX64\n", j, uGst, uShw);
                            else
                            {
                                RTHCPHYS HCPhys = NIL_RTHCPHYS;
                                int rc = PGMPhysGCPhys2HCPhys(pPool->CTX_SUFF(pVM), uGst & EPT_PDE2M_PG_MASK, &HCPhys);
                                if (   rc != VINF_SUCCESS
                                    || (uShw & EPT_E_PG_MASK) != HCPhys)
                                    pgmR3PoolCheckError(&State, "Mismatch HCPhys: rc=%Rrc idx=%#x guest %RX64 shw=%RX64 vs %RHp (2MB)\n",
                                                        rc, j, uGst, uShw, HCPhys);
                            }
                        }
                        else
                        {
                            PPGMPOOLPAGE pSubPage = HCPHYS_TO_POOL_PAGE(uShw & EPT_E_PG_MASK);
                            if (pSubPage)
                            {
                                if (   pSubPage->enmKind != PGMPOOLKIND_EPT_PT_FOR_EPT_PT
                                    && pSubPage->enmKind != PGMPOOLKIND_EPT_PT_FOR_EPT_2MB)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table type: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x %s\n",
                                                        j, uGst, uShw, pSubPage->idx, pgmPoolPoolKindToStr(pSubPage->enmKind));
                                if (pSubPage->fA20Enabled != pPage->fA20Enabled)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table A20: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x A20=%d, expected %d\n",
                                                        j, uGst, uShw, pSubPage->idx, pSubPage->fA20Enabled, pPage->fA20Enabled);
                                if (pSubPage->GCPhys != (uGst & EPT_E_PG_MASK))
                                    pgmR3PoolCheckError(&State, "Wrong sub-table GCPhys: idx=%#x guest %RX64 shw=%RX64: GCPhys=%#RGp idxSub=%#x\n",
                                                        j, uGst, uShw, pSubPage->GCPhys, pSubPage->idx);
                            }
                            else
                                pgmR3PoolCheckError(&State, "sub table not found: idx=%#x shw=%RX64\n", j, uShw);
                        }
                        if (    (uShw & (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE))
                             != (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE)
                            && (   ((uShw & EPT_E_READ)    && !(uGst & EPT_E_READ))
                                || ((uShw & EPT_E_WRITE)   && !(uGst & EPT_E_WRITE))
                                || ((uShw & EPT_E_EXECUTE) && !(uGst & EPT_E_EXECUTE)) ) )
                            pgmR3PoolCheckError(&State, "Mismatch r/w/x: idx=%#x guest %RX64 shw=%RX64\n",
                                                j, uGst, uShw);
                    }
                }
                break;
            }

            case PGMPOOLKIND_EPT_PDPT_FOR_EPT_PDPT:
            {
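                /* 1GiB pages are never shadowed, so any leaf PDPTE in the
                   shadow is an error; non-leaf entries must reference a
                   matching shadow page directory pool page. */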
                PCEPTPDPT const pShwPDPT = (PCEPTPDPT)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                PCEPTPDPT const pGstPDPT = (PCEPTPDPT)pvGuestPage;
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPDPT->a); j++)
                {
                    uint64_t const uShw = pShwPDPT->a[j].u;
                    if (uShw & EPT_PRESENT_MASK)
                    {
                        uint64_t const uGst = pGstPDPT->a[j].u;
                        if (uShw & EPT_E_LEAF)
                            pgmR3PoolCheckError(&State, "No 1GiB shadow pages: idx=%#x guest %RX64 shw=%RX64\n", j, uGst, uShw);
                        else
                        {
                            PPGMPOOLPAGE pSubPage = HCPHYS_TO_POOL_PAGE(uShw & EPT_E_PG_MASK);
                            if (pSubPage)
                            {
                                if (pSubPage->enmKind != PGMPOOLKIND_EPT_PD_FOR_EPT_PD)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table type: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x %s\n",
                                                        j, uGst, uShw, pSubPage->idx, pgmPoolPoolKindToStr(pSubPage->enmKind));
                                if (pSubPage->fA20Enabled != pPage->fA20Enabled)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table A20: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x A20=%d, expected %d\n",
                                                        j, uGst, uShw, pSubPage->idx, pSubPage->fA20Enabled, pPage->fA20Enabled);
                                if (pSubPage->GCPhys != (uGst & EPT_E_PG_MASK))
                                    pgmR3PoolCheckError(&State, "Wrong sub-table GCPhys: idx=%#x guest %RX64 shw=%RX64: GCPhys=%#RGp idxSub=%#x\n",
                                                        j, uGst, uShw, pSubPage->GCPhys, pSubPage->idx);
                            }
                            else
                                pgmR3PoolCheckError(&State, "sub table not found: idx=%#x shw=%RX64\n", j, uShw);
                        }
                        if (    (uShw & (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE))
                             != (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE)
                            && (   ((uShw & EPT_E_READ)    && !(uGst & EPT_E_READ))
                                || ((uShw & EPT_E_WRITE)   && !(uGst & EPT_E_WRITE))
                                || ((uShw & EPT_E_EXECUTE) && !(uGst & EPT_E_EXECUTE)) ) )
                            pgmR3PoolCheckError(&State, "Mismatch r/w/x: idx=%#x guest %RX64 shw=%RX64\n",
                                                j, uGst, uShw);
                    }
                }
                break;
            }

            case PGMPOOLKIND_EPT_PML4_FOR_EPT_PML4:
            {
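                /* Same checks as the PDPT case, one level up: a single PML4
                   entry spans 512GiB (hence the 0.5TiB message), and leaf
                   mappings of that size are never shadowed. */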
                PCEPTPML4 const pShwPML4 = (PCEPTPML4)PGMPOOL_PAGE_2_PTR(pPool->CTX_SUFF(pVM), pPage);
                PCEPTPML4 const pGstPML4 = (PCEPTPML4)pvGuestPage;
                for (unsigned j = 0; j < RT_ELEMENTS(pShwPML4->a); j++)
                {
                    uint64_t const uShw = pShwPML4->a[j].u;
                    if (uShw & EPT_PRESENT_MASK)
                    {
                        uint64_t const uGst = pGstPML4->a[j].u;
                        if (uShw & EPT_E_LEAF)
                            pgmR3PoolCheckError(&State, "No 0.5TiB shadow pages: idx=%#x guest %RX64 shw=%RX64\n", j, uGst, uShw);
                        else
                        {
                            PPGMPOOLPAGE pSubPage = HCPHYS_TO_POOL_PAGE(uShw & EPT_E_PG_MASK);
                            if (pSubPage)
                            {
                                if (pSubPage->enmKind != PGMPOOLKIND_EPT_PDPT_FOR_EPT_PDPT)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table type: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x %s\n",
                                                        j, uGst, uShw, pSubPage->idx, pgmPoolPoolKindToStr(pSubPage->enmKind));
                                if (pSubPage->fA20Enabled != pPage->fA20Enabled)
                                    pgmR3PoolCheckError(&State, "Wrong sub-table A20: idx=%#x guest %RX64 shw=%RX64: idxSub=%#x A20=%d, expected %d\n",
                                                        j, uGst, uShw, pSubPage->idx, pSubPage->fA20Enabled, pPage->fA20Enabled);
                                if (pSubPage->GCPhys != (uGst & EPT_E_PG_MASK))
                                    pgmR3PoolCheckError(&State, "Wrong sub-table GCPhys: idx=%#x guest %RX64 shw=%RX64: GCPhys=%#RGp idxSub=%#x\n",
                                                        j, uGst, uShw, pSubPage->GCPhys, pSubPage->idx);
                            }
                            else
                                pgmR3PoolCheckError(&State, "sub table not found: idx=%#x shw=%RX64\n", j, uShw);
                        }
                        if (    (uShw & (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE))
                             != (EPT_E_READ | EPT_E_WRITE | EPT_E_EXECUTE)
                            && (   ((uShw & EPT_E_READ)    && !(uGst & EPT_E_READ))
                                || ((uShw & EPT_E_WRITE)   && !(uGst & EPT_E_WRITE))
                                || ((uShw & EPT_E_EXECUTE) && !(uGst & EPT_E_EXECUTE)) ) )
                            pgmR3PoolCheckError(&State, "Mismatch r/w/x: idx=%#x guest %RX64 shw=%RX64\n",
                                                j, uGst, uShw);
                    }
                }
                break;
            }
        }
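        /* Pool page kinds without a case above -- only PAE page tables and
           the EPT table kinds are handled -- are not cross-checked by this
           command. */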

# undef HCPHYS_TO_POOL_PAGE
        if (pvGuestPage)
            PGMPhysReleasePageMappingLock(pVM, &LockPage);
    }
    PGM_UNLOCK(pVM);

    if (State.cErrors > 0)
        return DBGCCmdHlpFail(pCmdHlp, pCmd, "Found %u error(s)", State.cErrors);
    DBGCCmdHlpPrintf(pCmdHlp, "No errors found\n");
    return VINF_SUCCESS;
}
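
/*
 * Usage sketch (hedged, not part of the original source): '.pgmpoolcheck'
 * takes no arguments and is typed at the DBGC debugger console prompt.  The
 * command table entry and its registration live elsewhere, presumably in the
 * pool initialization code; the sketch below shows what such a hookup
 * typically looks like.  The DBGCCMD field order (pszCmd, cArgsMin, cArgsMax,
 * paArgDescs, cArgDescs, fFlags, pfnHandler, pszSyntax, pszDescription) and
 * the s_aPoolDbgCmds/rcCmd names are assumptions based on VBox/dbg.h.
 */
# if 0 /* illustrative sketch only */
static const DBGCCMD s_aPoolDbgCmds[] =
{
    /* pszCmd,        cArgsMin, cArgsMax, paArgDescs, cArgDescs, fFlags, pfnHandler,        pszSyntax, pszDescription */
    { "pgmpoolcheck", 0,        0,        NULL,       0,         0,      pgmR3PoolCmdCheck, "",        "Check the shadow page pool." },
};

/* Registration, done once during initialization: */
int rcCmd = DBGCRegisterCommands(&s_aPoolDbgCmds[0], RT_ELEMENTS(s_aPoolDbgCmds));
AssertRC(rcCmd);
# endif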

#endif /* VBOX_WITH_DEBUGGER */