VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR0/GMMR0.cpp@ 19420

Last change on this file since 19420 was 19381, checked in by vboxsync, 15 years ago

Further breakup of GVM. Deal with VCPU thread handles.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id
File size: 104.1 KB
/* $Id: GMMR0.cpp 19381 2009-05-05 14:44:43Z vboxsync $ */
/** @file
 * GMM - Global Memory Manager.
 */

/*
 * Copyright (C) 2007 Sun Microsystems, Inc.
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
 * Clara, CA 95054 USA or visit http://www.sun.com if you need
 * additional information or have any questions.
 */


/** @page pg_gmm GMM - The Global Memory Manager
 *
 * As the name indicates, this component is responsible for global memory
 * management. Currently only guest RAM is allocated from the GMM, but this
 * may change to include shadow page tables and other bits later.
 *
 * Guest RAM is managed as individual pages, but allocated from the host OS
 * in chunks for reasons of portability / efficiency. To minimize the memory
 * footprint all tracking structures must be as small as possible without
 * incurring unnecessary performance penalties.
 *
 * The allocation chunks have a fixed size, defined at compile time
 * by the #GMM_CHUNK_SIZE \#define.
 *
 * Each chunk is given a unique ID. Each page also has a unique ID. The
 * relationship between the two IDs is:
 * @code
 * GMM_CHUNK_SHIFT = log2(GMM_CHUNK_SIZE / PAGE_SIZE);
 * idPage = (idChunk << GMM_CHUNK_SHIFT) | iPage;
 * @endcode
 * Where iPage is the index of the page within the chunk. This ID scheme
 * permits efficient chunk and page lookup, but it relies on the chunk size
 * being set at compile time. The chunks are organized in an AVL tree with their
 * IDs being the keys.
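 *
 * To illustrate, the inverse mapping used by the page lookup code further
 * down is just a shift and a mask (a sketch, assuming the GMM_CHUNKID_SHIFT
 * and GMM_PAGEID_IDX_MASK constants from gmm.h encode the same split as
 * GMM_CHUNK_SHIFT above - see gmmR0GetPage for the real code):
 * @code
 * idChunk = idPage >> GMM_CHUNKID_SHIFT;
 * iPage   = idPage & GMM_PAGEID_IDX_MASK;
 * @endcode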
 *
 * The physical address of each page in an allocation chunk is maintained by
 * the #RTR0MEMOBJ and obtained using #RTR0MemObjGetPagePhysAddr. There is no
 * need to duplicate this information (it'd cost 8 bytes per page if we did).
 *
 * So what do we need to track per page? Most importantly we need to know
 * which state the page is in:
 *   - Private - Allocated for (eventually) backing one particular VM page.
 *   - Shared  - Readonly page that is used by one or more VMs and treated
 *               as COW by PGM.
 *   - Free    - Not used by anyone.
 *
 * For the page replacement operations (sharing, defragmenting and freeing)
 * to be somewhat efficient, private pages need to be associated with a
 * particular page in a particular VM.
 *
 * Tracking the usage of shared pages is impractical and expensive, so we'll
 * settle for a reference counting system instead.
 *
 * Free pages will be chained on LIFOs.
 *
 * On 64-bit systems we will use a 64-bit bitfield per page, while on 32-bit
 * systems a 32-bit bitfield will have to suffice because of address space
 * limitations. The #GMMPAGE structure shows the details.
 *
 *
 * @section sec_gmm_alloc_strat Page Allocation Strategy
 *
 * The strategy for allocating pages has to take fragmentation and shared
 * pages into account, or we may end up with 2000 chunks with only
 * a few pages in each. Shared pages cannot easily be reallocated because
 * of the inaccurate usage accounting (see above). Private pages can be
 * reallocated by a defragmentation thread in the same manner that sharing
 * is done.
 *
 * The first approach is to manage the free pages in two sets depending on
 * whether they are mainly for the allocation of shared or private pages.
 * In the initial implementation there will be almost no possibility for
 * mixing shared and private pages in the same chunk (only if we're really
 * stressed on memory), but when we implement forking of VMs and have to
 * deal with lots of COW pages it'll start getting kind of interesting.
 *
 * The sets are lists of chunks with approximately the same number of
 * free pages. Say the chunk size is 1MB, meaning 256 pages, and a set
 * consists of 16 lists. So, the first list will contain the chunks with
 * 1-16 free pages, the second covers 17-32, and so on. The chunks will be
 * moved between the lists as pages are freed up or allocated.
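 *
 * Concretely, the list a chunk lands on is derived from its free page count
 * the way gmmR0LinkChunk does it further down; with GMM_CHUNK_FREE_SET_SHIFT
 * being 4 that gives (a sketch of the existing code, not a new interface):
 * @code
 * iList = (cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT;
 * // cFree 1..16 -> list 0, 17..32 -> list 1, and so on.
 * @endcode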
 *
 *
 * @section sec_gmm_costs Costs
 *
 * The per page cost in kernel space is 32-bit plus whatever RTR0MEMOBJ
 * entails. In addition there is the chunk cost of approximately
 * (sizeof(RTR0MEMOBJ) + sizeof(GMMCHUNK)) / 2^GMM_CHUNK_SHIFT bytes per page.
 *
 * On Windows the per page #RTR0MEMOBJ cost is 32-bit on 32-bit Windows
 * and 64-bit on 64-bit Windows (a PFN_NUMBER in the MDL). So, 64-bit per page.
 * The cost on Linux is identical, but here it's because of sizeof(struct page *).
 *
 *
 * @section sec_gmm_legacy Legacy Mode for Non-Tier-1 Platforms
 *
 * In legacy mode the page source is locked user pages and not
 * #RTR0MemObjAllocPhysNC, which means that a page can only be allocated
 * by the VM that locked it. We will make no attempt at implementing
 * page sharing on these systems, just do enough to make it all work.
 *
 *
 * @subsection sub_gmm_locking Serializing
 *
 * One simple fast mutex will be employed in the initial implementation, not
 * two as mentioned in @ref subsec_pgmPhys_Serializing.
 *
 * @see @ref subsec_pgmPhys_Serializing
 *
 *
 * @section sec_gmm_overcommit Memory Over-Commitment Management
 *
 * The GVM will have to do the system wide memory over-commitment
 * management. My current ideas are:
 *   - Per VM oc policy that indicates how much to initially commit
 *     to it and what to do in an out-of-memory situation.
 *   - Prevent overtaxing the host.
 *
 * There are some challenges here, the main ones are configurability and
 * security. Should we for instance permit anyone to request 100% memory
 * commitment? Who should be allowed to do runtime adjustments of the
 * config? And how do we prevent these settings from being lost when the
 * last VM process exits? The solution is probably to have an optional root
 * daemon that will keep VMMR0.r0 in memory and enable the security measures.
 *
 *
 *
 * @section sec_gmm_numa NUMA
 *
 * NUMA considerations will be designed and implemented a bit later.
 *
 * The preliminary guess is that we will have to try to allocate memory as
 * close as possible to the CPUs the VM is executed on (EMT and additional CPU
 * threads). Which means it's mostly about allocation and sharing policies.
 * Both the scheduler and the allocator interface will have to supply some
 * NUMA info and we'll need to have a way to calculate access costs.
 *
 */


/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_GMM
#include <VBox/gmm.h>
#include "GMMR0Internal.h"
#include <VBox/gvm.h>
#include <VBox/log.h>
#include <VBox/param.h>
#include <VBox/err.h>
#include <iprt/avl.h>
#include <iprt/mem.h>
#include <iprt/memobj.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>


/*******************************************************************************
*   Structures and Typedefs                                                    *
*******************************************************************************/
/** Pointer to a set of free chunks. */
typedef struct GMMCHUNKFREESET *PGMMCHUNKFREESET;

/** Pointer to a GMM allocation chunk. */
typedef struct GMMCHUNK *PGMMCHUNK;

/**
 * The per-page tracking structure employed by the GMM.
 *
 * On 32-bit hosts some trickery is necessary to compress all
 * the information into 32 bits. When the fSharedFree member is set,
 * the 30th bit decides whether it's a free page or not.
 *
 * Because of the different layout on 32-bit and 64-bit hosts, macros
 * are used to get and set some of the data.
 */
typedef union GMMPAGE
{
#if HC_ARCH_BITS == 64
    /** Unsigned integer view. */
    uint64_t u;

    /** The common view. */
    struct GMMPAGECOMMON
    {
        uint32_t uStuff1 : 32;
        uint32_t uStuff2 : 30;
        /** The page state. */
        uint32_t u2State : 2;
    } Common;

    /** The view of a private page. */
    struct GMMPAGEPRIVATE
    {
        /** The guest page frame number. (Max addressable: 2 ^ 44 - 16) */
        uint32_t pfn;
        /** The GVM handle. (64K VMs) */
        uint32_t hGVM : 16;
        /** Reserved. */
        uint32_t u16Reserved : 14;
        /** The page state. */
        uint32_t u2State : 2;
    } Private;

    /** The view of a shared page. */
    struct GMMPAGESHARED
    {
        /** The reference count. */
        uint32_t cRefs;
        /** Reserved. Checksum or something? Two hGVMs for forking? */
        uint32_t u30Reserved : 30;
        /** The page state. */
        uint32_t u2State : 2;
    } Shared;

    /** The view of a free page. */
    struct GMMPAGEFREE
    {
        /** The index of the next page in the free list. UINT16_MAX is NIL. */
        uint16_t iNext;
        /** Reserved. Checksum or something? */
        uint16_t u16Reserved0;
        /** Reserved. Checksum or something? */
        uint32_t u30Reserved1 : 30;
        /** The page state. */
        uint32_t u2State : 2;
    } Free;

#else /* 32-bit */
    /** Unsigned integer view. */
    uint32_t u;

    /** The common view. */
    struct GMMPAGECOMMON
    {
        uint32_t uStuff : 30;
        /** The page state. */
        uint32_t u2State : 2;
    } Common;

    /** The view of a private page. */
    struct GMMPAGEPRIVATE
    {
        /** The guest page frame number. (Max addressable: 2 ^ 36) */
        uint32_t pfn : 24;
        /** The GVM handle. (127 VMs) */
        uint32_t hGVM : 7;
        /** The top page state bit, MBZ. */
        uint32_t fZero : 1;
    } Private;

    /** The view of a shared page. */
    struct GMMPAGESHARED
    {
        /** The reference count. */
        uint32_t cRefs : 30;
        /** The page state. */
        uint32_t u2State : 2;
    } Shared;

    /** The view of a free page. */
    struct GMMPAGEFREE
    {
        /** The index of the next page in the free list. UINT16_MAX is NIL. */
        uint32_t iNext : 16;
        /** Reserved. Checksum or something? */
        uint32_t u14Reserved : 14;
        /** The page state. */
        uint32_t u2State : 2;
    } Free;
#endif
} GMMPAGE;
AssertCompileSize(GMMPAGE, sizeof(RTHCUINTPTR));
/** Pointer to a GMMPAGE. */
typedef GMMPAGE *PGMMPAGE;


/** @name The Page States.
 * @{ */
/** A private page. */
#define GMM_PAGE_STATE_PRIVATE      0
/** A private page - alternative value used on the 32-bit implementation.
 * This will never be used on 64-bit hosts. */
#define GMM_PAGE_STATE_PRIVATE_32   1
/** A shared page. */
#define GMM_PAGE_STATE_SHARED       2
/** A free page. */
#define GMM_PAGE_STATE_FREE         3
/** @} */


/** @def GMM_PAGE_IS_PRIVATE
 *
 * @returns true if private, false if not.
 * @param   pPage   The GMM page.
 */
#if HC_ARCH_BITS == 64
# define GMM_PAGE_IS_PRIVATE(pPage) ( (pPage)->Common.u2State == GMM_PAGE_STATE_PRIVATE )
#else
# define GMM_PAGE_IS_PRIVATE(pPage) ( (pPage)->Private.fZero == 0 )
#endif

/** @def GMM_PAGE_IS_SHARED
 *
 * @returns true if shared, false if not.
 * @param   pPage   The GMM page.
 */
#define GMM_PAGE_IS_SHARED(pPage)   ( (pPage)->Common.u2State == GMM_PAGE_STATE_SHARED )

/** @def GMM_PAGE_IS_FREE
 *
 * @returns true if free, false if not.
 * @param   pPage   The GMM page.
 */
#define GMM_PAGE_IS_FREE(pPage)     ( (pPage)->Common.u2State == GMM_PAGE_STATE_FREE )

/** @def GMM_PAGE_PFN_LAST
 * The last valid guest pfn range.
 * @remark Some of the values outside the range have special meaning,
 *         see GMM_PAGE_PFN_UNSHAREABLE.
 */
#if HC_ARCH_BITS == 64
# define GMM_PAGE_PFN_LAST          UINT32_C(0xfffffff0)
#else
# define GMM_PAGE_PFN_LAST          UINT32_C(0x00fffff0)
#endif
AssertCompile(GMM_PAGE_PFN_LAST == (GMM_GCPHYS_LAST >> PAGE_SHIFT));

/** @def GMM_PAGE_PFN_UNSHAREABLE
 * Indicates that this page isn't used for normal guest memory and thus isn't shareable.
 */
#if HC_ARCH_BITS == 64
# define GMM_PAGE_PFN_UNSHAREABLE   UINT32_C(0xfffffff1)
#else
# define GMM_PAGE_PFN_UNSHAREABLE   UINT32_C(0x00fffff1)
#endif
AssertCompile(GMM_PAGE_PFN_UNSHAREABLE == (GMM_GCPHYS_UNSHAREABLE >> PAGE_SHIFT));


/**
 * A GMM allocation chunk ring-3 mapping record.
 *
 * This should really be associated with a session and not a VM, but
 * it's simpler to associate it with a VM and clean it up when the VM
 * object is destroyed.
 */
typedef struct GMMCHUNKMAP
{
    /** The mapping object. */
    RTR0MEMOBJ MapObj;
    /** The VM owning the mapping. */
    PGVM pGVM;
} GMMCHUNKMAP;
/** Pointer to a GMM allocation chunk mapping. */
typedef struct GMMCHUNKMAP *PGMMCHUNKMAP;


/**
 * A GMM allocation chunk.
 */
typedef struct GMMCHUNK
{
    /** The AVL node core.
     * The Key is the chunk ID. */
    AVLU32NODECORE Core;
    /** The memory object.
     * Either from RTR0MemObjAllocPhysNC or RTR0MemObjLockUser depending on
     * what the host can dish up with. */
    RTR0MEMOBJ MemObj;
    /** Pointer to the next chunk in the free list. */
    PGMMCHUNK pFreeNext;
    /** Pointer to the previous chunk in the free list. */
    PGMMCHUNK pFreePrev;
    /** Pointer to the free set this chunk belongs to. NULL for
     * chunks with no free pages. */
    PGMMCHUNKFREESET pSet;
    /** Pointer to an array of mappings. */
    PGMMCHUNKMAP paMappings;
    /** The number of mappings. */
    uint16_t cMappings;
    /** The head of the list of free pages. UINT16_MAX is the NIL value. */
    uint16_t iFreeHead;
    /** The number of free pages. */
    uint16_t cFree;
    /** The GVM handle of the VM that first allocated pages from this chunk, this
     * is used as a preference when there are several chunks to choose from.
     * When in bound memory mode this isn't a preference any longer. */
    uint16_t hGVM;
    /** The number of private pages. */
    uint16_t cPrivate;
    /** The number of shared pages. */
    uint16_t cShared;
#if HC_ARCH_BITS == 64
    /** Reserved for later. */
    uint16_t au16Reserved[2];
#endif
    /** The pages. */
    GMMPAGE aPages[GMM_CHUNK_SIZE >> PAGE_SHIFT];
} GMMCHUNK;


/**
 * An allocation chunk TLB entry.
 */
typedef struct GMMCHUNKTLBE
{
    /** The chunk id. */
    uint32_t idChunk;
    /** Pointer to the chunk. */
    PGMMCHUNK pChunk;
} GMMCHUNKTLBE;
/** Pointer to an allocation chunk TLB entry. */
typedef GMMCHUNKTLBE *PGMMCHUNKTLBE;


/** The number of entries in the allocation chunk TLB. */
#define GMM_CHUNKTLB_ENTRIES        32
/** Gets the TLB entry index for the given Chunk ID. */
#define GMM_CHUNKTLB_IDX(idChunk)   ( (idChunk) & (GMM_CHUNKTLB_ENTRIES - 1) )
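
/* How the TLB is meant to be used (a sketch mirroring gmmR0GetChunk below,
 * not additional API):
 *     PGMMCHUNKTLBE pTlbe = &pGMM->ChunkTLB.aEntries[GMM_CHUNKTLB_IDX(idChunk)];
 *     if (pTlbe->idChunk == idChunk && pTlbe->pChunk)
 *         ...TLB hit, use pTlbe->pChunk...
 *     else
 *         ...fall back to the AVL tree and refill the entry...
 */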

/**
 * An allocation chunk TLB.
 */
typedef struct GMMCHUNKTLB
{
    /** The TLB entries. */
    GMMCHUNKTLBE aEntries[GMM_CHUNKTLB_ENTRIES];
} GMMCHUNKTLB;
/** Pointer to an allocation chunk TLB. */
typedef GMMCHUNKTLB *PGMMCHUNKTLB;


/** The GMMCHUNK::cFree shift count. */
#define GMM_CHUNK_FREE_SET_SHIFT    4
/** The GMMCHUNK::cFree mask for use when considering relinking a chunk. */
#define GMM_CHUNK_FREE_SET_MASK     15
/** The number of lists in a set. */
#define GMM_CHUNK_FREE_SET_LISTS    (GMM_CHUNK_NUM_PAGES >> GMM_CHUNK_FREE_SET_SHIFT)

/**
 * A set of free chunks.
 */
typedef struct GMMCHUNKFREESET
{
    /** The number of free pages in the set. */
    uint64_t cPages;
    /** Chunks ordered by increasing number of free pages. */
    PGMMCHUNK apLists[GMM_CHUNK_FREE_SET_LISTS];
} GMMCHUNKFREESET;


/**
 * The GMM instance data.
 */
typedef struct GMM
{
    /** Magic / eye catcher. GMM_MAGIC */
    uint32_t u32Magic;
    /** The fast mutex protecting the GMM.
     * More fine grained locking can be implemented later if necessary. */
    RTSEMFASTMUTEX Mtx;
    /** The chunk tree. */
    PAVLU32NODECORE pChunks;
    /** The chunk TLB. */
    GMMCHUNKTLB ChunkTLB;
    /** The private free set. */
    GMMCHUNKFREESET Private;
    /** The shared free set. */
    GMMCHUNKFREESET Shared;

    /** The maximum number of pages we're allowed to allocate.
     * @gcfgm 64-bit GMM/MaxPages Direct.
     * @gcfgm 32-bit GMM/PctPages Relative to the number of host pages. */
    uint64_t cMaxPages;
    /** The number of pages that have been reserved.
     * The deal is that cReservedPages - cOverCommittedPages <= cMaxPages. */
    uint64_t cReservedPages;
    /** The number of pages that we have over-committed in reservations. */
    uint64_t cOverCommittedPages;
    /** The number of actually allocated (committed if you like) pages. */
    uint64_t cAllocatedPages;
    /** The number of pages that are shared. A subset of cAllocatedPages. */
    uint64_t cSharedPages;
    /** The number of pages that are shared that have been left behind by
     * VMs not doing proper cleanups. */
    uint64_t cLeftBehindSharedPages;
    /** The number of allocation chunks.
     * (The number of pages we've allocated from the host can be derived from this.) */
    uint32_t cChunks;
    /** The number of current ballooned pages. */
    uint64_t cBalloonedPages;

    /** The legacy allocation mode indicator.
     * This is determined at initialization time. */
    bool fLegacyAllocationMode;
    /** The bound memory mode indicator.
     * When set, the memory will be bound to a specific VM and never
     * shared. This is always set if fLegacyAllocationMode is set.
     * (Also determined at initialization time.) */
    bool fBoundMemoryMode;
    /** The number of registered VMs. */
    uint16_t cRegisteredVMs;

    /** The previous allocated Chunk ID.
     * Used as a hint to avoid scanning the whole bitmap. */
    uint32_t idChunkPrev;
    /** Chunk ID allocation bitmap.
     * Bits of allocated IDs are set, free ones are clear.
     * The NIL id (0) is marked allocated. */
    uint32_t bmChunkId[(GMM_CHUNKID_LAST + 1 + 31) / 32];
} GMM;
/** Pointer to the GMM instance. */
typedef GMM *PGMM;

/** The value of GMM::u32Magic (Katsuhiro Otomo). */
#define GMM_MAGIC 0x19540414


/*******************************************************************************
*   Global Variables                                                           *
*******************************************************************************/
/** Pointer to the GMM instance data. */
static PGMM g_pGMM = NULL;

/** Macro for obtaining and validating the g_pGMM pointer.
 * On failure it will return from the invoking function with the specified return value.
 *
 * @param   pGMM    The name of the pGMM variable.
 * @param   rc      The return value on failure. Use VERR_INTERNAL_ERROR for
 *                  VBox status codes.
 */
#define GMM_GET_VALID_INSTANCE(pGMM, rc) \
    do { \
        (pGMM) = g_pGMM; \
        AssertPtrReturn((pGMM), (rc)); \
        AssertMsgReturn((pGMM)->u32Magic == GMM_MAGIC, ("%p - %#x\n", (pGMM), (pGMM)->u32Magic), (rc)); \
    } while (0)

/** Macro for obtaining and validating the g_pGMM pointer, void function variant.
 * On failure it will return from the invoking function.
 *
 * @param   pGMM    The name of the pGMM variable.
 */
#define GMM_GET_VALID_INSTANCE_VOID(pGMM) \
    do { \
        (pGMM) = g_pGMM; \
        AssertPtrReturnVoid((pGMM)); \
        AssertMsgReturnVoid((pGMM)->u32Magic == GMM_MAGIC, ("%p - %#x\n", (pGMM), (pGMM)->u32Magic)); \
    } while (0)
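
/* Typical use of the instance getters (a sketch; GMMR0InitialReservation and
 * friends below do exactly this):
 *     PGMM pGMM;
 *     GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
 *     ...pGMM is now validated and safe to dereference...
 */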


/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static DECLCALLBACK(int) gmmR0TermDestroyChunk(PAVLU32NODECORE pNode, void *pvGMM);
static DECLCALLBACK(int) gmmR0CleanupVMScanChunk(PAVLU32NODECORE pNode, void *pvGMM);
/*static*/ DECLCALLBACK(int) gmmR0CleanupVMDestroyChunk(PAVLU32NODECORE pNode, void *pvGVM);
DECLINLINE(void) gmmR0LinkChunk(PGMMCHUNK pChunk, PGMMCHUNKFREESET pSet);
DECLINLINE(void) gmmR0UnlinkChunk(PGMMCHUNK pChunk);
static void gmmR0FreeChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk);
static void gmmR0FreeSharedPage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage);
static int gmmR0UnmapChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk);



/**
 * Initializes the GMM component.
 *
 * This is called when the VMMR0.r0 module is loaded and protected by the
 * loader semaphore.
 *
 * @returns VBox status code.
 */
GMMR0DECL(int) GMMR0Init(void)
{
    LogFlow(("GMMInit:\n"));

    /*
     * Allocate the instance data and the lock(s).
     */
    PGMM pGMM = (PGMM)RTMemAllocZ(sizeof(*pGMM));
    if (!pGMM)
        return VERR_NO_MEMORY;
    pGMM->u32Magic = GMM_MAGIC;
    for (unsigned i = 0; i < RT_ELEMENTS(pGMM->ChunkTLB.aEntries); i++)
        pGMM->ChunkTLB.aEntries[i].idChunk = NIL_GMM_CHUNKID;
    ASMBitSet(&pGMM->bmChunkId[0], NIL_GMM_CHUNKID);

    int rc = RTSemFastMutexCreate(&pGMM->Mtx);
    if (RT_SUCCESS(rc))
    {
        /*
         * Check and see if RTR0MemObjAllocPhysNC works.
         */
#if 0 /* later, see #3170. */
        RTR0MEMOBJ MemObj;
        rc = RTR0MemObjAllocPhysNC(&MemObj, _64K, NIL_RTHCPHYS);
        if (RT_SUCCESS(rc))
        {
            rc = RTR0MemObjFree(MemObj, true);
            AssertRC(rc);
        }
        else if (rc == VERR_NOT_SUPPORTED)
            pGMM->fLegacyAllocationMode = pGMM->fBoundMemoryMode = true;
        else
            SUPR0Printf("GMMR0Init: RTR0MemObjAllocPhysNC(,64K,Any) -> %d!\n", rc);
#else
# ifdef RT_OS_WINDOWS
        pGMM->fLegacyAllocationMode = false;
# else
        pGMM->fLegacyAllocationMode = true;
# endif
        pGMM->fBoundMemoryMode = true;
#endif

        /*
         * Query system page count and guess a reasonable cMaxPages value.
         */
        pGMM->cMaxPages = UINT32_MAX; /** @todo IPRT function for query ram size and such. */

        g_pGMM = pGMM;
        LogFlow(("GMMInit: pGMM=%p fLegacyAllocationMode=%RTbool fBoundMemoryMode=%RTbool\n", pGMM, pGMM->fLegacyAllocationMode, pGMM->fBoundMemoryMode));
        return VINF_SUCCESS;
    }

    RTMemFree(pGMM);
    SUPR0Printf("GMMR0Init: failed! rc=%d\n", rc);
    return rc;
}


/**
 * Terminates the GMM component.
 */
GMMR0DECL(void) GMMR0Term(void)
{
    LogFlow(("GMMTerm:\n"));

    /*
     * Take care / be paranoid...
     */
    PGMM pGMM = g_pGMM;
    if (!VALID_PTR(pGMM))
        return;
    if (pGMM->u32Magic != GMM_MAGIC)
    {
        SUPR0Printf("GMMR0Term: u32Magic=%#x\n", pGMM->u32Magic);
        return;
    }

    /*
     * Undo what init did and free all the resources we've acquired.
     */
    /* Destroy the fundamentals. */
    g_pGMM = NULL;
    pGMM->u32Magic++;
    RTSemFastMutexDestroy(pGMM->Mtx);
    pGMM->Mtx = NIL_RTSEMFASTMUTEX;

    /* Free any chunks still hanging around. */
    RTAvlU32Destroy(&pGMM->pChunks, gmmR0TermDestroyChunk, pGMM);

    /* Finally the instance data itself. */
    RTMemFree(pGMM);
    LogFlow(("GMMTerm: done\n"));
}


/**
 * RTAvlU32Destroy callback.
 *
 * @returns 0
 * @param   pNode   The node to destroy.
 * @param   pvGMM   The GMM handle.
 */
static DECLCALLBACK(int) gmmR0TermDestroyChunk(PAVLU32NODECORE pNode, void *pvGMM)
{
    PGMMCHUNK pChunk = (PGMMCHUNK)pNode;

    if (pChunk->cFree != (GMM_CHUNK_SIZE >> PAGE_SHIFT))
        SUPR0Printf("GMMR0Term: %p/%#x: cFree=%d cPrivate=%d cShared=%d cMappings=%d\n", pChunk,
                    pChunk->Core.Key, pChunk->cFree, pChunk->cPrivate, pChunk->cShared, pChunk->cMappings);

    int rc = RTR0MemObjFree(pChunk->MemObj, true /* fFreeMappings */);
    if (RT_FAILURE(rc))
    {
        SUPR0Printf("GMMR0Term: %p/%#x: RTR0MemObjFree(%p,true) -> %d (cMappings=%d)\n", pChunk,
                    pChunk->Core.Key, pChunk->MemObj, rc, pChunk->cMappings);
        AssertRC(rc);
    }
    pChunk->MemObj = NIL_RTR0MEMOBJ;

    RTMemFree(pChunk->paMappings);
    pChunk->paMappings = NULL;

    RTMemFree(pChunk);
    NOREF(pvGMM);
    return 0;
}


/**
 * Initializes the per-VM data for the GMM.
 *
 * This is called from within the GVMM lock (from GVMMR0CreateVM)
 * and should only initialize the data members so GMMR0CleanupVM
 * can deal with them. We reserve no memory or anything here,
 * that's done later in GMMR0InitVM.
 *
 * @param   pGVM    Pointer to the Global VM structure.
 */
GMMR0DECL(void) GMMR0InitPerVMData(PGVM pGVM)
{
    AssertCompile(RT_SIZEOFMEMB(GVM,gmm.s) <= RT_SIZEOFMEMB(GVM,gmm.padding));

    pGVM->gmm.s.enmPolicy = GMMOCPOLICY_INVALID;
    pGVM->gmm.s.enmPriority = GMMPRIORITY_INVALID;
    pGVM->gmm.s.fMayAllocate = false;
}


/**
 * Cleans up when a VM is terminating.
 *
 * @param   pGVM    Pointer to the Global VM structure.
 */
GMMR0DECL(void) GMMR0CleanupVM(PGVM pGVM)
{
    LogFlow(("GMMR0CleanupVM: pGVM=%p:{.pVM=%p, .hSelf=%#x}\n", pGVM, pGVM->pVM, pGVM->hSelf));

    PGMM pGMM;
    GMM_GET_VALID_INSTANCE_VOID(pGMM);

    int rc = RTSemFastMutexRequest(pGMM->Mtx);
    AssertRC(rc);

    /*
     * The policy is 'INVALID' until the initial reservation
     * request has been serviced.
     */
    if (    pGVM->gmm.s.enmPolicy > GMMOCPOLICY_INVALID
        &&  pGVM->gmm.s.enmPolicy < GMMOCPOLICY_END)
    {
        /*
         * If it's the last VM around, we can skip walking all the chunks looking
         * for the pages owned by this VM and instead flush the whole shebang.
         *
         * This takes care of the eventuality that a VM has left shared page
         * references behind (shouldn't happen of course, but you never know).
         */
        Assert(pGMM->cRegisteredVMs);
        pGMM->cRegisteredVMs--;
#if 0 /* disabled so it won't hide bugs. */
        if (!pGMM->cRegisteredVMs)
        {
            RTAvlU32Destroy(&pGMM->pChunks, gmmR0CleanupVMDestroyChunk, pGMM);

            for (unsigned i = 0; i < RT_ELEMENTS(pGMM->ChunkTLB.aEntries); i++)
            {
                pGMM->ChunkTLB.aEntries[i].idChunk = NIL_GMM_CHUNKID;
                pGMM->ChunkTLB.aEntries[i].pChunk = NULL;
            }

            memset(&pGMM->Private, 0, sizeof(pGMM->Private));
            memset(&pGMM->Shared, 0, sizeof(pGMM->Shared));

            memset(&pGMM->bmChunkId[0], 0, sizeof(pGMM->bmChunkId));
            ASMBitSet(&pGMM->bmChunkId[0], NIL_GMM_CHUNKID);

            pGMM->cReservedPages = 0;
            pGMM->cOverCommittedPages = 0;
            pGMM->cAllocatedPages = 0;
            pGMM->cSharedPages = 0;
            pGMM->cLeftBehindSharedPages = 0;
            pGMM->cChunks = 0;
            pGMM->cBalloonedPages = 0;
        }
        else
#endif
        {
            /*
             * Walk the entire pool looking for pages that belong to this VM
             * and left over mappings. (This'll only catch private pages, shared
             * pages will be 'left behind'.)
             */
            uint64_t cPrivatePages = pGVM->gmm.s.cPrivatePages; /* save */
            RTAvlU32DoWithAll(&pGMM->pChunks, true /* fFromLeft */, gmmR0CleanupVMScanChunk, pGVM);
            if (pGVM->gmm.s.cPrivatePages)
                SUPR0Printf("GMMR0CleanupVM: hGVM=%#x has %#x private pages that cannot be found!\n", pGVM->hSelf, pGVM->gmm.s.cPrivatePages);
            pGMM->cAllocatedPages -= cPrivatePages;

            /* Free empty chunks. */
            if (cPrivatePages)
            {
                PGMMCHUNK pCur = pGMM->Private.apLists[RT_ELEMENTS(pGMM->Private.apLists) - 1];
                while (pCur)
                {
                    PGMMCHUNK pNext = pCur->pFreeNext;
                    if (    pCur->cFree == GMM_CHUNK_NUM_PAGES
                        &&  (   !pGMM->fBoundMemoryMode
                             || pCur->hGVM == pGVM->hSelf))
                        gmmR0FreeChunk(pGMM, pGVM, pCur);
                    pCur = pNext;
                }
            }

            /* Account for shared pages that weren't freed. */
            if (pGVM->gmm.s.cSharedPages)
            {
                Assert(pGMM->cSharedPages >= pGVM->gmm.s.cSharedPages);
                SUPR0Printf("GMMR0CleanupVM: hGVM=%#x left %#x shared pages behind!\n", pGVM->hSelf, pGVM->gmm.s.cSharedPages);
                pGMM->cLeftBehindSharedPages += pGVM->gmm.s.cSharedPages;
            }

            /*
             * Update the over-commitment management statistics.
             */
            pGMM->cReservedPages -= pGVM->gmm.s.Reserved.cBasePages
                                  + pGVM->gmm.s.Reserved.cFixedPages
                                  + pGVM->gmm.s.Reserved.cShadowPages;
            switch (pGVM->gmm.s.enmPolicy)
            {
                case GMMOCPOLICY_NO_OC:
                    break;
                default:
                    /** @todo Update GMM->cOverCommittedPages */
                    break;
            }
        }
    }

    /* Zap the GVM data. */
    pGVM->gmm.s.enmPolicy = GMMOCPOLICY_INVALID;
    pGVM->gmm.s.enmPriority = GMMPRIORITY_INVALID;
    pGVM->gmm.s.fMayAllocate = false;

    RTSemFastMutexRelease(pGMM->Mtx);

    LogFlow(("GMMR0CleanupVM: returns\n"));
}


/**
 * RTAvlU32DoWithAll callback.
 *
 * @returns 0
 * @param   pNode   The node to search.
 * @param   pvGVM   Pointer to the shared VM structure.
 */
static DECLCALLBACK(int) gmmR0CleanupVMScanChunk(PAVLU32NODECORE pNode, void *pvGVM)
{
    PGMMCHUNK pChunk = (PGMMCHUNK)pNode;
    PGVM pGVM = (PGVM)pvGVM;

    /*
     * Look for pages belonging to the VM.
     * (Perform some internal checks while we're scanning.)
     */
#ifndef VBOX_STRICT
    if (pChunk->cFree != (GMM_CHUNK_SIZE >> PAGE_SHIFT))
#endif
    {
        unsigned cPrivate = 0;
        unsigned cShared = 0;
        unsigned cFree = 0;

        uint16_t hGVM = pGVM->hSelf;
        unsigned iPage = (GMM_CHUNK_SIZE >> PAGE_SHIFT);
        while (iPage-- > 0)
            if (GMM_PAGE_IS_PRIVATE(&pChunk->aPages[iPage]))
            {
                if (pChunk->aPages[iPage].Private.hGVM == hGVM)
                {
                    /*
                     * Free the page.
                     *
                     * The reason for not using gmmR0FreePrivatePage here is that we
                     * must *not* cause the chunk to be freed from under us - we're in
                     * an AVL tree walk here.
                     */
                    pChunk->aPages[iPage].u = 0;
                    pChunk->aPages[iPage].Free.iNext = pChunk->iFreeHead;
                    pChunk->aPages[iPage].Free.u2State = GMM_PAGE_STATE_FREE;
                    pChunk->iFreeHead = iPage;
                    pChunk->cPrivate--;
                    if ((pChunk->cFree & GMM_CHUNK_FREE_SET_MASK) == 0)
                    {
                        gmmR0UnlinkChunk(pChunk);
                        pChunk->cFree++;
                        gmmR0LinkChunk(pChunk, pChunk->cShared ? &g_pGMM->Shared : &g_pGMM->Private);
                    }
                    else
                        pChunk->cFree++;
                    pGVM->gmm.s.cPrivatePages--;
                    cFree++;
                }
                else
                    cPrivate++;
            }
            else if (GMM_PAGE_IS_FREE(&pChunk->aPages[iPage]))
                cFree++;
            else
                cShared++;

        /*
         * Did it add up?
         */
        if (RT_UNLIKELY(    pChunk->cFree != cFree
                        ||  pChunk->cPrivate != cPrivate
                        ||  pChunk->cShared != cShared))
        {
            SUPR0Printf("gmmR0CleanupVMScanChunk: Chunk %p/%#x has bogus stats - free=%d/%d private=%d/%d shared=%d/%d\n",
                        pChunk, pChunk->Core.Key, pChunk->cFree, cFree, pChunk->cPrivate, cPrivate, pChunk->cShared, cShared);
            pChunk->cFree = cFree;
            pChunk->cPrivate = cPrivate;
            pChunk->cShared = cShared;
        }
    }

    /*
     * Look for the mapping belonging to the terminating VM.
     */
    for (unsigned i = 0; i < pChunk->cMappings; i++)
        if (pChunk->paMappings[i].pGVM == pGVM)
        {
            RTR0MEMOBJ MemObj = pChunk->paMappings[i].MapObj;

            pChunk->cMappings--;
            if (i < pChunk->cMappings)
                pChunk->paMappings[i] = pChunk->paMappings[pChunk->cMappings];
            pChunk->paMappings[pChunk->cMappings].pGVM = NULL;
            pChunk->paMappings[pChunk->cMappings].MapObj = NIL_RTR0MEMOBJ;

            int rc = RTR0MemObjFree(MemObj, false /* fFreeMappings (NA) */);
            if (RT_FAILURE(rc))
            {
                SUPR0Printf("gmmR0CleanupVMScanChunk: %p/%#x: mapping #%x: RTR0MemObjFree(%p,false) -> %d\n",
                            pChunk, pChunk->Core.Key, i, MemObj, rc);
                AssertRC(rc);
            }
            break;
        }

    /*
     * If not in bound memory mode, we should reset the hGVM field
     * if it has our handle in it.
     */
    if (pChunk->hGVM == pGVM->hSelf)
    {
        if (!g_pGMM->fBoundMemoryMode)
            pChunk->hGVM = NIL_GVM_HANDLE;
        else if (pChunk->cFree != GMM_CHUNK_NUM_PAGES)
        {
            SUPR0Printf("gmmR0CleanupVMScanChunk: %p/%#x: cFree=%#x - it should be 0 in bound mode!\n",
                        pChunk, pChunk->Core.Key, pChunk->cFree);
            AssertMsgFailed(("%p/%#x: cFree=%#x - it should be 0 in bound mode!\n", pChunk, pChunk->Core.Key, pChunk->cFree));

            gmmR0UnlinkChunk(pChunk);
            pChunk->cFree = GMM_CHUNK_NUM_PAGES;
            gmmR0LinkChunk(pChunk, pChunk->cShared ? &g_pGMM->Shared : &g_pGMM->Private);
        }
    }

    return 0;
}


/**
 * RTAvlU32Destroy callback for GMMR0CleanupVM.
 *
 * @returns 0
 * @param   pNode   The node (allocation chunk) to destroy.
 * @param   pvGVM   Pointer to the shared VM structure.
 */
/*static*/ DECLCALLBACK(int) gmmR0CleanupVMDestroyChunk(PAVLU32NODECORE pNode, void *pvGVM)
{
    PGMMCHUNK pChunk = (PGMMCHUNK)pNode;
    PGVM pGVM = (PGVM)pvGVM;

    for (unsigned i = 0; i < pChunk->cMappings; i++)
    {
        if (pChunk->paMappings[i].pGVM != pGVM)
            SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: pGVM=%p expected %p\n", pChunk,
                        pChunk->Core.Key, i, pChunk->paMappings[i].pGVM, pGVM);
        int rc = RTR0MemObjFree(pChunk->paMappings[i].MapObj, false /* fFreeMappings (NA) */);
        if (RT_FAILURE(rc))
        {
            SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: RTR0MemObjFree(%p,false) -> %d\n", pChunk,
                        pChunk->Core.Key, i, pChunk->paMappings[i].MapObj, rc);
            AssertRC(rc);
        }
    }

    int rc = RTR0MemObjFree(pChunk->MemObj, true /* fFreeMappings */);
    if (RT_FAILURE(rc))
    {
        SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: RTR0MemObjFree(%p,true) -> %d (cMappings=%d)\n", pChunk,
                    pChunk->Core.Key, pChunk->MemObj, rc, pChunk->cMappings);
        AssertRC(rc);
    }
    pChunk->MemObj = NIL_RTR0MEMOBJ;

    RTMemFree(pChunk->paMappings);
    pChunk->paMappings = NULL;

    RTMemFree(pChunk);
    return 0;
}


/**
 * The initial resource reservations.
 *
 * This will make memory reservations according to policy and priority. If there
 * aren't sufficient resources available to sustain the VM this function will
 * fail and all future allocation requests will fail as well.
 *
 * These are just the initial reservations made very early during the VM creation
 * process and will be adjusted later in the GMMR0UpdateReservation call after the
 * ring-3 init has completed.
 *
 * @returns VBox status code.
 * @retval  VERR_GMM_MEMORY_RESERVATION_DECLINED
 * @retval  VERR_GMM_
 *
 * @param   pVM             Pointer to the shared VM structure.
 * @param   idCpu           VCPU id
 * @param   cBasePages      The number of pages that may be allocated for the base RAM and ROMs.
 *                          This does not include MMIO2 and similar.
 * @param   cShadowPages    The number of pages that may be allocated for shadow paging structures.
 * @param   cFixedPages     The number of pages that may be allocated for fixed objects like the
 *                          hyper heap, MMIO2 and similar.
 * @param   enmPolicy       The OC policy to use on this VM.
 * @param   enmPriority     The priority in an out-of-memory situation.
 *
 * @thread  The creator thread / EMT.
 */
GMMR0DECL(int) GMMR0InitialReservation(PVM pVM, unsigned idCpu, uint64_t cBasePages, uint32_t cShadowPages, uint32_t cFixedPages,
                                       GMMOCPOLICY enmPolicy, GMMPRIORITY enmPriority)
{
    LogFlow(("GMMR0InitialReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x enmPolicy=%d enmPriority=%d\n",
             pVM, cBasePages, cShadowPages, cFixedPages, enmPolicy, enmPriority));

    /*
     * Validate, get basics and take the semaphore.
     */
    PGMM pGMM;
    GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
    PGVM pGVM = GVMMR0ByVM(pVM);
    if (RT_UNLIKELY(!pGVM))
        return VERR_INVALID_PARAMETER;
    if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
        return VERR_NOT_OWNER;

    AssertReturn(cBasePages, VERR_INVALID_PARAMETER);
    AssertReturn(cShadowPages, VERR_INVALID_PARAMETER);
    AssertReturn(cFixedPages, VERR_INVALID_PARAMETER);
    AssertReturn(enmPolicy > GMMOCPOLICY_INVALID && enmPolicy < GMMOCPOLICY_END, VERR_INVALID_PARAMETER);
    AssertReturn(enmPriority > GMMPRIORITY_INVALID && enmPriority < GMMPRIORITY_END, VERR_INVALID_PARAMETER);

    int rc = RTSemFastMutexRequest(pGMM->Mtx);
    AssertRC(rc);

    if (    !pGVM->gmm.s.Reserved.cBasePages
        &&  !pGVM->gmm.s.Reserved.cFixedPages
        &&  !pGVM->gmm.s.Reserved.cShadowPages)
    {
        /*
         * Check if we can accommodate this.
         */
        /* ... later ... */
        if (RT_SUCCESS(rc))
        {
            /*
             * Update the records.
             */
            pGVM->gmm.s.Reserved.cBasePages = cBasePages;
            pGVM->gmm.s.Reserved.cFixedPages = cFixedPages;
            pGVM->gmm.s.Reserved.cShadowPages = cShadowPages;
            pGVM->gmm.s.enmPolicy = enmPolicy;
            pGVM->gmm.s.enmPriority = enmPriority;
            pGVM->gmm.s.fMayAllocate = true;

            pGMM->cReservedPages += cBasePages + cFixedPages + cShadowPages;
            pGMM->cRegisteredVMs++;
        }
    }
    else
        rc = VERR_WRONG_ORDER;

    RTSemFastMutexRelease(pGMM->Mtx);
    LogFlow(("GMMR0InitialReservation: returns %Rrc\n", rc));
    return rc;
}


/**
 * VMMR0 request wrapper for GMMR0InitialReservation.
 *
 * @returns see GMMR0InitialReservation.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   idCpu   VCPU id
 * @param   pReq    The request packet.
 */
GMMR0DECL(int) GMMR0InitialReservationReq(PVM pVM, unsigned idCpu, PGMMINITIALRESERVATIONREQ pReq)
{
    /*
     * Validate input and pass it on.
     */
    AssertPtrReturn(pVM, VERR_INVALID_POINTER);
    AssertPtrReturn(pReq, VERR_INVALID_POINTER);
    AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);

    return GMMR0InitialReservation(pVM, idCpu, pReq->cBasePages, pReq->cShadowPages, pReq->cFixedPages, pReq->enmPolicy, pReq->enmPriority);
}


/**
 * This updates the memory reservation with the additional MMIO2 and ROM pages.
 *
 * @returns VBox status code.
 * @retval  VERR_GMM_MEMORY_RESERVATION_DECLINED
 *
 * @param   pVM             Pointer to the shared VM structure.
 * @param   idCpu           VCPU id
 * @param   cBasePages      The number of pages that may be allocated for the base RAM and ROMs.
 *                          This does not include MMIO2 and similar.
 * @param   cShadowPages    The number of pages that may be allocated for shadow paging structures.
 * @param   cFixedPages     The number of pages that may be allocated for fixed objects like the
 *                          hyper heap, MMIO2 and similar.
 *
 * @thread  EMT.
 */
GMMR0DECL(int) GMMR0UpdateReservation(PVM pVM, unsigned idCpu, uint64_t cBasePages, uint32_t cShadowPages, uint32_t cFixedPages)
{
    LogFlow(("GMMR0UpdateReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x\n",
             pVM, cBasePages, cShadowPages, cFixedPages));

    /*
     * Validate, get basics and take the semaphore.
     */
    PGMM pGMM;
    GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
    PGVM pGVM = GVMMR0ByVM(pVM);
    if (RT_UNLIKELY(!pGVM))
        return VERR_INVALID_PARAMETER;
    if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
        return VERR_NOT_OWNER;

    AssertReturn(cBasePages, VERR_INVALID_PARAMETER);
    AssertReturn(cShadowPages, VERR_INVALID_PARAMETER);
    AssertReturn(cFixedPages, VERR_INVALID_PARAMETER);

    int rc = RTSemFastMutexRequest(pGMM->Mtx);
    AssertRC(rc);

    if (    pGVM->gmm.s.Reserved.cBasePages
        &&  pGVM->gmm.s.Reserved.cFixedPages
        &&  pGVM->gmm.s.Reserved.cShadowPages)
    {
        /*
         * Check if we can accommodate this.
         */
        /* ... later ... */
        if (RT_SUCCESS(rc))
        {
            /*
             * Update the records.
             */
            pGMM->cReservedPages -= pGVM->gmm.s.Reserved.cBasePages
                                  + pGVM->gmm.s.Reserved.cFixedPages
                                  + pGVM->gmm.s.Reserved.cShadowPages;
            pGMM->cReservedPages += cBasePages + cFixedPages + cShadowPages;

            pGVM->gmm.s.Reserved.cBasePages = cBasePages;
            pGVM->gmm.s.Reserved.cFixedPages = cFixedPages;
            pGVM->gmm.s.Reserved.cShadowPages = cShadowPages;
        }
    }
    else
        rc = VERR_WRONG_ORDER;

    RTSemFastMutexRelease(pGMM->Mtx);
    LogFlow(("GMMR0UpdateReservation: returns %Rrc\n", rc));
    return rc;
}


/**
 * VMMR0 request wrapper for GMMR0UpdateReservation.
 *
 * @returns see GMMR0UpdateReservation.
 * @param   pVM     Pointer to the shared VM structure.
 * @param   idCpu   VCPU id
 * @param   pReq    The request packet.
 */
GMMR0DECL(int) GMMR0UpdateReservationReq(PVM pVM, unsigned idCpu, PGMMUPDATERESERVATIONREQ pReq)
{
    /*
     * Validate input and pass it on.
     */
    AssertPtrReturn(pVM, VERR_INVALID_POINTER);
    AssertPtrReturn(pReq, VERR_INVALID_POINTER);
    AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);

    return GMMR0UpdateReservation(pVM, idCpu, pReq->cBasePages, pReq->cShadowPages, pReq->cFixedPages);
}

/**
 * Looks up a chunk in the tree and fills in the TLB entry for it.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the allocation chunk, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The ID of the chunk to find.
 * @param   pTlbe   Pointer to the TLB entry.
 */
static PGMMCHUNK gmmR0GetChunkSlow(PGMM pGMM, uint32_t idChunk, PGMMCHUNKTLBE pTlbe)
{
    PGMMCHUNK pChunk = (PGMMCHUNK)RTAvlU32Get(&pGMM->pChunks, idChunk);
    AssertMsgReturn(pChunk, ("Chunk %#x not found!\n", idChunk), NULL);
    pTlbe->idChunk = idChunk;
    pTlbe->pChunk = pChunk;
    return pChunk;
}


/**
 * Finds an allocation chunk.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the allocation chunk, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The ID of the chunk to find.
 */
DECLINLINE(PGMMCHUNK) gmmR0GetChunk(PGMM pGMM, uint32_t idChunk)
{
    /*
     * Do a TLB lookup, branch if not in the TLB.
     */
    PGMMCHUNKTLBE pTlbe = &pGMM->ChunkTLB.aEntries[GMM_CHUNKTLB_IDX(idChunk)];
    if (    pTlbe->idChunk != idChunk
        ||  !pTlbe->pChunk)
        return gmmR0GetChunkSlow(pGMM, idChunk, pTlbe);
    return pTlbe->pChunk;
}


/**
 * Finds a page.
 *
 * This is not expected to fail and will bitch if it does.
 *
 * @returns Pointer to the page, NULL if not found.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idPage  The ID of the page to find.
 */
DECLINLINE(PGMMPAGE) gmmR0GetPage(PGMM pGMM, uint32_t idPage)
{
    PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
    if (RT_LIKELY(pChunk))
        return &pChunk->aPages[idPage & GMM_PAGEID_IDX_MASK];
    return NULL;
}

/**
 * Unlinks the chunk from the free list it's currently on (if any).
 *
 * @param   pChunk  The allocation chunk.
 */
DECLINLINE(void) gmmR0UnlinkChunk(PGMMCHUNK pChunk)
{
    PGMMCHUNKFREESET pSet = pChunk->pSet;
    if (RT_LIKELY(pSet))
    {
        pSet->cPages -= pChunk->cFree;

        PGMMCHUNK pPrev = pChunk->pFreePrev;
        PGMMCHUNK pNext = pChunk->pFreeNext;
        if (pPrev)
            pPrev->pFreeNext = pNext;
        else
            pSet->apLists[(pChunk->cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT] = pNext;
        if (pNext)
            pNext->pFreePrev = pPrev;

        pChunk->pSet = NULL;
        pChunk->pFreeNext = NULL;
        pChunk->pFreePrev = NULL;
    }
    else
    {
        Assert(!pChunk->pFreeNext);
        Assert(!pChunk->pFreePrev);
        Assert(!pChunk->cFree);
    }
}


/**
 * Links the chunk onto the appropriate free list in the specified free set.
 *
 * If no free entries, it's not linked into any list.
 *
 * @param   pChunk  The allocation chunk.
 * @param   pSet    The free set.
 */
DECLINLINE(void) gmmR0LinkChunk(PGMMCHUNK pChunk, PGMMCHUNKFREESET pSet)
{
    Assert(!pChunk->pSet);
    Assert(!pChunk->pFreeNext);
    Assert(!pChunk->pFreePrev);

    if (pChunk->cFree > 0)
    {
        pChunk->pSet = pSet;
        pChunk->pFreePrev = NULL;
        unsigned iList = (pChunk->cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT;
        pChunk->pFreeNext = pSet->apLists[iList];
        if (pChunk->pFreeNext)
            pChunk->pFreeNext->pFreePrev = pChunk;
        pSet->apLists[iList] = pChunk;

        pSet->cPages += pChunk->cFree;
    }
}

/**
 * Frees a Chunk ID.
 *
 * @param   pGMM    Pointer to the GMM instance.
 * @param   idChunk The Chunk ID to free.
 */
static void gmmR0FreeChunkId(PGMM pGMM, uint32_t idChunk)
{
    AssertReturnVoid(idChunk != NIL_GMM_CHUNKID);
    AssertMsg(ASMBitTest(&pGMM->bmChunkId[0], idChunk), ("%#x\n", idChunk));
    ASMAtomicBitClear(&pGMM->bmChunkId[0], idChunk);
}


/**
 * Allocates a new Chunk ID.
 *
 * @returns The Chunk ID.
 * @param   pGMM    Pointer to the GMM instance.
 */
static uint32_t gmmR0AllocateChunkId(PGMM pGMM)
{
    AssertCompile(!((GMM_CHUNKID_LAST + 1) & 31)); /* must be a multiple of 32 */
    AssertCompile(NIL_GMM_CHUNKID == 0);

    /*
     * Try the next sequential one.
     */
    int32_t idChunk = ++pGMM->idChunkPrev;
#if 0 /* test the fallback first */
    if (    idChunk <= GMM_CHUNKID_LAST
        &&  idChunk > NIL_GMM_CHUNKID
        &&  !ASMAtomicBitTestAndSet(&pGMM->bmChunkId[0], idChunk))
        return idChunk;
#endif

    /*
     * Scan sequentially from the last one.
     */
    if (    (uint32_t)idChunk < GMM_CHUNKID_LAST
        &&  idChunk > NIL_GMM_CHUNKID)
    {
        idChunk = ASMBitNextClear(&pGMM->bmChunkId[0], GMM_CHUNKID_LAST + 1, idChunk);
        if (idChunk > NIL_GMM_CHUNKID)
        {
            AssertMsgReturn(!ASMAtomicBitTestAndSet(&pGMM->bmChunkId[0], idChunk), ("%#x\n", idChunk), NIL_GMM_CHUNKID);
            return pGMM->idChunkPrev = idChunk;
        }
    }

    /*
     * Ok, scan from the start.
     * We're not racing anyone, so there is no need to expect failures or have restart loops.
     */
    idChunk = ASMBitFirstClear(&pGMM->bmChunkId[0], GMM_CHUNKID_LAST + 1);
    AssertMsgReturn(idChunk > NIL_GMM_CHUNKID, ("%#x\n", idChunk), NIL_GMM_CHUNKID);
    AssertMsgReturn(!ASMAtomicBitTestAndSet(&pGMM->bmChunkId[0], idChunk), ("%#x\n", idChunk), NIL_GMM_CHUNKID);

    return pGMM->idChunkPrev = idChunk;
}


/**
 * Registers a new chunk of memory.
 *
 * This is called by both gmmR0AllocateOneChunk and GMMR0SeedChunk. Will take
 * the mutex, the caller must not own it.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   pSet    Pointer to the set.
 * @param   MemObj  The memory object for the chunk.
 * @param   hGVM    The affinity of the chunk. NIL_GVM_HANDLE for no
 *                  affinity.
 */
static int gmmR0RegisterChunk(PGMM pGMM, PGMMCHUNKFREESET pSet, RTR0MEMOBJ MemObj, uint16_t hGVM)
{
    Assert(hGVM != NIL_GVM_HANDLE || pGMM->fBoundMemoryMode);

    int rc;
    PGMMCHUNK pChunk = (PGMMCHUNK)RTMemAllocZ(sizeof(*pChunk));
    if (pChunk)
    {
        /*
         * Initialize it.
         */
        pChunk->MemObj = MemObj;
        pChunk->cFree = GMM_CHUNK_NUM_PAGES;
        pChunk->hGVM = hGVM;
        pChunk->iFreeHead = 0;
        for (unsigned iPage = 0; iPage < RT_ELEMENTS(pChunk->aPages) - 1; iPage++)
        {
            pChunk->aPages[iPage].Free.u2State = GMM_PAGE_STATE_FREE;
            pChunk->aPages[iPage].Free.iNext = iPage + 1;
        }
        pChunk->aPages[RT_ELEMENTS(pChunk->aPages) - 1].Free.u2State = GMM_PAGE_STATE_FREE;
        pChunk->aPages[RT_ELEMENTS(pChunk->aPages) - 1].Free.iNext = UINT16_MAX;

        /*
         * Allocate a Chunk ID and insert it into the tree.
         * This has to be done behind the mutex of course.
         */
        rc = RTSemFastMutexRequest(pGMM->Mtx);
        if (RT_SUCCESS(rc))
        {
            pChunk->Core.Key = gmmR0AllocateChunkId(pGMM);
            if (    pChunk->Core.Key != NIL_GMM_CHUNKID
                &&  pChunk->Core.Key <= GMM_CHUNKID_LAST
                &&  RTAvlU32Insert(&pGMM->pChunks, &pChunk->Core))
            {
                pGMM->cChunks++;
                gmmR0LinkChunk(pChunk, pSet);
                LogFlow(("gmmR0RegisterChunk: pChunk=%p id=%#x cChunks=%d\n", pChunk, pChunk->Core.Key, pGMM->cChunks));
                RTSemFastMutexRelease(pGMM->Mtx);
                return VINF_SUCCESS;
            }

            /* bail out */
            rc = VERR_INTERNAL_ERROR;
            RTSemFastMutexRelease(pGMM->Mtx);
        }
        RTMemFree(pChunk);
    }
    else
        rc = VERR_NO_MEMORY;
    return rc;
}


/**
 * Allocate one new chunk and add it to the specified free set.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance.
 * @param   pSet    Pointer to the set.
 * @param   hGVM    The affinity of the new chunk.
 *
 * @remarks Called without owning the mutex.
 */
static int gmmR0AllocateOneChunk(PGMM pGMM, PGMMCHUNKFREESET pSet, uint16_t hGVM)
{
    /*
     * Allocate the memory.
     */
    RTR0MEMOBJ MemObj;
    int rc = RTR0MemObjAllocPhysNC(&MemObj, GMM_CHUNK_SIZE, NIL_RTHCPHYS);
    if (RT_SUCCESS(rc))
    {
        rc = gmmR0RegisterChunk(pGMM, pSet, MemObj, hGVM);
        if (RT_FAILURE(rc))
            RTR0MemObjFree(MemObj, false /* fFreeMappings */);
    }
    /** @todo Check that RTR0MemObjAllocPhysNC always returns VERR_NO_MEMORY on
     *        allocation failure. */
    return rc;
}

/**
 * Attempts to allocate more pages until the requested amount is met.
 *
 * @returns VBox status code.
 * @param   pGMM    Pointer to the GMM instance data.
 * @param   pGVM    The calling VM.
 * @param   pSet    Pointer to the free set to grow.
 * @param   cPages  The number of pages needed.
 *
 * @remarks Called owning the mutex, but will leave it temporarily while
 *          allocating the memory!
 */
static int gmmR0AllocateMoreChunks(PGMM pGMM, PGVM pGVM, PGMMCHUNKFREESET pSet, uint32_t cPages)
{
    Assert(!pGMM->fLegacyAllocationMode);

    if (!pGMM->fBoundMemoryMode)
    {
        /*
         * Try to steal free chunks from the other set first. (Only take 100% free chunks.)
         */
        PGMMCHUNKFREESET pOtherSet = pSet == &pGMM->Private ? &pGMM->Shared : &pGMM->Private;
        while (    pSet->cPages < cPages
               &&  pOtherSet->cPages >= GMM_CHUNK_NUM_PAGES)
        {
            PGMMCHUNK pChunk = pOtherSet->apLists[GMM_CHUNK_FREE_SET_LISTS - 1];
            while (pChunk && pChunk->cFree != GMM_CHUNK_NUM_PAGES)
                pChunk = pChunk->pFreeNext;
            if (!pChunk)
                break;

            gmmR0UnlinkChunk(pChunk);
            gmmR0LinkChunk(pChunk, pSet);
        }

        /*
         * If we still need more pages, allocate new chunks.
         * Note! We will leave the mutex while doing the allocation,
         *       gmmR0AllocateOneChunk will re-take it temporarily while registering the chunk.
         */
        while (pSet->cPages < cPages)
        {
            RTSemFastMutexRelease(pGMM->Mtx);
            int rc = gmmR0AllocateOneChunk(pGMM, pSet, NIL_GVM_HANDLE);
            int rc2 = RTSemFastMutexRequest(pGMM->Mtx);
            AssertRCReturn(rc2, rc2);
            if (RT_FAILURE(rc))
                return rc;
        }
    }
    else
    {
        /*
         * The memory is bound to the VM allocating it, so we have to count
         * the free pages carefully as well as make sure we brand each new
         * chunk with our VM handle.
         *
         * Note! We will leave the mutex while doing the allocation,
         *       gmmR0AllocateOneChunk will re-take it temporarily while registering the chunk.
         */
        uint16_t const hGVM = pGVM->hSelf;
        for (;;)
        {
            /* Count and see if we've reached the goal. */
            uint32_t cPagesFound = 0;
            for (unsigned i = 0; i < RT_ELEMENTS(pSet->apLists); i++)
                for (PGMMCHUNK pCur = pSet->apLists[i]; pCur; pCur = pCur->pFreeNext)
                    if (pCur->hGVM == hGVM)
                    {
                        cPagesFound += pCur->cFree;
                        if (cPagesFound >= cPages)
                            break;
                    }
            if (cPagesFound >= cPages)
                break;

            /* Allocate more. */
            RTSemFastMutexRelease(pGMM->Mtx);
            int rc = gmmR0AllocateOneChunk(pGMM, pSet, hGVM);
            int rc2 = RTSemFastMutexRequest(pGMM->Mtx);
            AssertRCReturn(rc2, rc2);
            if (RT_FAILURE(rc))
                return rc;
        }
    }

    return VINF_SUCCESS;
}


/**
 * Allocates one private page.
 *
 * Worker for gmmR0AllocatePages.
 *
 * @param   pGMM        Pointer to the GMM instance data.
 * @param   hGVM        The GVM handle of the VM requesting memory.
 * @param   pChunk      The chunk to allocate it from.
 * @param   pPageDesc   The page descriptor.
 */
static void gmmR0AllocatePage(PGMM pGMM, uint32_t hGVM, PGMMCHUNK pChunk, PGMMPAGEDESC pPageDesc)
{
    /* update the chunk stats. */
    if (pChunk->hGVM == NIL_GVM_HANDLE)
        pChunk->hGVM = hGVM;
    Assert(pChunk->cFree);
    pChunk->cFree--;
    pChunk->cPrivate++;

    /* unlink the first free page. */
    const uint32_t iPage = pChunk->iFreeHead;
    AssertReleaseMsg(iPage < RT_ELEMENTS(pChunk->aPages), ("%d\n", iPage));
    PGMMPAGE pPage = &pChunk->aPages[iPage];
    Assert(GMM_PAGE_IS_FREE(pPage));
    pChunk->iFreeHead = pPage->Free.iNext;
    Log3(("A pPage=%p iPage=%#x/%#x u2State=%d iFreeHead=%#x iNext=%#x\n",
          pPage, iPage, (pChunk->Core.Key << GMM_CHUNKID_SHIFT) | iPage,
          pPage->Common.u2State, pChunk->iFreeHead, pPage->Free.iNext));

    /* make the page private. */
    pPage->u = 0;
    AssertCompile(GMM_PAGE_STATE_PRIVATE == 0);
    pPage->Private.hGVM = hGVM;
    AssertCompile(NIL_RTHCPHYS >= GMM_GCPHYS_LAST);
    AssertCompile(GMM_GCPHYS_UNSHAREABLE >= GMM_GCPHYS_LAST);
    if (pPageDesc->HCPhysGCPhys <= GMM_GCPHYS_LAST)
        pPage->Private.pfn = pPageDesc->HCPhysGCPhys >> PAGE_SHIFT;
    else
        pPage->Private.pfn = GMM_PAGE_PFN_UNSHAREABLE; /* unshareable / unassigned - same thing. */

    /* update the page descriptor. */
    pPageDesc->HCPhysGCPhys = RTR0MemObjGetPagePhysAddr(pChunk->MemObj, iPage);
    Assert(pPageDesc->HCPhysGCPhys != NIL_RTHCPHYS);
    pPageDesc->idPage = (pChunk->Core.Key << GMM_CHUNKID_SHIFT) | iPage;
    pPageDesc->idSharedPage = NIL_GMM_PAGEID;
}
1639
1640
1641/**
1642 * Common worker for GMMR0AllocateHandyPages and GMMR0AllocatePages.
1643 *
1644 * @returns VBox status code:
1645 * @retval VINF_SUCCESS on success.
1646 * @retval VERR_GMM_SEED_ME if seeding via GMMR0SeedChunk or
1647 * gmmR0AllocateMoreChunks is necessary.
1648 * @retval VERR_GMM_HIT_GLOBAL_LIMIT if we've exhausted the available pages.
1649 * @retval VERR_GMM_HIT_VM_ACCOUNT_LIMIT if we've hit the VM account limit,
1650 * that is we're trying to allocate more than we've reserved.
1651 *
1652 * @param pGMM Pointer to the GMM instance data.
1653 * @param pGVM Pointer to the shared VM structure.
1654 * @param cPages The number of pages to allocate.
1655 * @param paPages Pointer to the page descriptors.
1656 * See GMMPAGEDESC for details on what is expected on input.
1657 * @param enmAccount The account to charge.
1658 */
1659static int gmmR0AllocatePages(PGMM pGMM, PGVM pGVM, uint32_t cPages, PGMMPAGEDESC paPages, GMMACCOUNT enmAccount)
1660{
1661 /*
1662 * Check allocation limits.
1663 */
1664 if (RT_UNLIKELY(pGMM->cAllocatedPages + cPages > pGMM->cMaxPages))
1665 return VERR_GMM_HIT_GLOBAL_LIMIT;
1666
1667 switch (enmAccount)
1668 {
1669 case GMMACCOUNT_BASE:
1670 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cBasePages + cPages > pGVM->gmm.s.Reserved.cBasePages))
1671 {
1672 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1673 pGVM->gmm.s.Reserved.cBasePages, pGVM->gmm.s.Allocated.cBasePages, cPages));
1674 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1675 }
1676 break;
1677 case GMMACCOUNT_SHADOW:
1678 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cShadowPages + cPages > pGVM->gmm.s.Reserved.cShadowPages))
1679 {
1680 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1681 pGVM->gmm.s.Reserved.cShadowPages, pGVM->gmm.s.Allocated.cShadowPages, cPages));
1682 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1683 }
1684 break;
1685 case GMMACCOUNT_FIXED:
1686 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cFixedPages + cPages > pGVM->gmm.s.Reserved.cFixedPages))
1687 {
1688 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1689 pGVM->gmm.s.Reserved.cFixedPages, pGVM->gmm.s.Allocated.cFixedPages, cPages));
1690 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1691 }
1692 break;
1693 default:
1694 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
1695 }
1696
1697 /*
1698 * Check whether we need to allocate more memory. In bound memory mode this
1699 * is a bit of extra work, but it's easier to do it upfront than to bail out later.
1700 */
1701 PGMMCHUNKFREESET pSet = &pGMM->Private;
1702 if (pSet->cPages < cPages)
1703 return VERR_GMM_SEED_ME;
1704 if (pGMM->fBoundMemoryMode)
1705 {
1706 uint16_t hGVM = pGVM->hSelf;
1707 uint32_t cPagesFound = 0;
1708 for (unsigned i = 0; i < RT_ELEMENTS(pSet->apLists); i++)
1709 for (PGMMCHUNK pCur = pSet->apLists[i]; pCur; pCur = pCur->pFreeNext)
1710 if (pCur->hGVM == hGVM)
1711 {
1712 cPagesFound += pCur->cFree;
1713 if (cPagesFound >= cPages)
1714 break;
1715 }
1716 if (cPagesFound < cPages)
1717 return VERR_GMM_SEED_ME;
1718 }
1719
1720 /*
1721 * Pick the pages.
1722 * Make some effort to keep VMs on the private chunks they already use.
1723 */
1724 uint16_t hGVM = pGVM->hSelf;
1725 uint32_t iPage = 0;
1726
1727 /* first round, pick from chunks with an affinity to the VM. */
1728 for (unsigned i = 0; i < RT_ELEMENTS(pSet->apLists) && iPage < cPages; i++)
1729 {
1731 PGMMCHUNK pCur = pSet->apLists[i];
1732 while (pCur && iPage < cPages)
1733 {
1734 PGMMCHUNK pNext = pCur->pFreeNext;
1735
1736 if ( pCur->hGVM == hGVM
1737 && pCur->cFree < GMM_CHUNK_NUM_PAGES)
1738 {
1739 gmmR0UnlinkChunk(pCur);
1740 for (; pCur->cFree && iPage < cPages; iPage++)
1741 gmmR0AllocatePage(pGMM, hGVM, pCur, &paPages[iPage]);
1742 gmmR0LinkChunk(pCur, pSet);
1743 }
1744
1745 pCur = pNext;
1746 }
1747 }
1748
1749 if (iPage < cPages)
1750 {
1751 /* second round, pick pages from the 100% empty chunks we just skipped above. */
1753 PGMMCHUNK pCur = pSet->apLists[RT_ELEMENTS(pSet->apLists) - 1];
1754 while (pCur && iPage < cPages)
1755 {
1756 PGMMCHUNK pNext = pCur->pFreeNext;
1757
1758 if ( pCur->cFree == GMM_CHUNK_NUM_PAGES
1759 && ( pCur->hGVM == hGVM
1760 || !pGMM->fBoundMemoryMode))
1761 {
1762 gmmR0UnlinkChunk(pCur);
1763 for (; pCur->cFree && iPage < cPages; iPage++)
1764 gmmR0AllocatePage(pGMM, hGVM, pCur, &paPages[iPage]);
1765 gmmR0LinkChunk(pCur, pSet);
1766 }
1767
1768 pCur = pNext;
1769 }
1770 }
1771
1772 if ( iPage < cPages
1773 && !pGMM->fBoundMemoryMode)
1774 {
1775 /* third round, disregard affinity. */
1776 unsigned i = RT_ELEMENTS(pSet->apLists);
1777 while (i-- > 0 && iPage < cPages)
1778 {
1780 PGMMCHUNK pCur = pSet->apLists[i];
1781 while (pCur && iPage < cPages)
1782 {
1783 PGMMCHUNK pNext = pCur->pFreeNext;
1784
1785 if ( pCur->cFree > GMM_CHUNK_NUM_PAGES / 2
1786 && cPages >= GMM_CHUNK_NUM_PAGES / 2)
1787 pCur->hGVM = hGVM; /* change chunk affinity */
1788
1789 gmmR0UnlinkChunk(pCur);
1790 for (; pCur->cFree && iPage < cPages; iPage++)
1791 gmmR0AllocatePage(pGMM, hGVM, pCur, &paPages[iPage]);
1792 gmmR0LinkChunk(pCur, pSet);
1793
1794 pCur = pNext;
1795 }
1796 }
1797 }
1798
1799 /*
1800 * Update the account.
1801 */
1802 switch (enmAccount)
1803 {
1804 case GMMACCOUNT_BASE: pGVM->gmm.s.Allocated.cBasePages += iPage; break;
1805 case GMMACCOUNT_SHADOW: pGVM->gmm.s.Allocated.cShadowPages += iPage; break;
1806 case GMMACCOUNT_FIXED: pGVM->gmm.s.Allocated.cFixedPages += iPage; break;
1807 default:
1808 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
1809 }
1810 pGVM->gmm.s.cPrivatePages += iPage;
1811 pGMM->cAllocatedPages += iPage;
1812
1813 AssertMsgReturn(iPage == cPages, ("%u != %u\n", iPage, cPages), VERR_INTERNAL_ERROR);
1814
1815 /*
1816 * Check if we've reached some threshold and should kick one or two VMs and tell
1817 * them to inflate their balloons a bit more... later.
1818 */
1819
1820 return VINF_SUCCESS;
1821}
1822
1823
1824/**
1825 * Updates the previous allocations and allocates more pages.
1826 *
1827 * The handy pages are always taken from the 'base' memory account.
1828 * The allocated pages are not cleared and will contain random garbage.
1829 *
1830 * @returns VBox status code:
1831 * @retval VINF_SUCCESS on success.
1832 * @retval VERR_NOT_OWNER if the caller is not an EMT.
1833 * @retval VERR_GMM_PAGE_NOT_FOUND if one of the pages to update wasn't found.
1834 * @retval VERR_GMM_PAGE_NOT_PRIVATE if one of the pages to update wasn't a
1835 * private page.
1836 * @retval VERR_GMM_PAGE_NOT_SHARED if one of the pages to update wasn't a
1837 * shared page.
1838 * @retval VERR_GMM_NOT_PAGE_OWNER if one of the pages to be updated wasn't
1839 * owned by the VM.
1840 * @retval VERR_GMM_SEED_ME if seeding via GMMR0SeedChunk is necessary.
1841 * @retval VERR_GMM_HIT_GLOBAL_LIMIT if we've exhausted the available pages.
1842 * @retval VERR_GMM_HIT_VM_ACCOUNT_LIMIT if we've hit the VM account limit,
1843 * that is, we're trying to allocate more than we've reserved.
1844 *
1845 * @param pVM Pointer to the shared VM structure.
1846 * @param idCpu VCPU id
1847 * @param cPagesToUpdate The number of pages to update (starting from the head).
1848 * @param cPagesToAlloc The number of pages to allocate (starting from the head).
1849 * @param paPages The array of page descriptors.
1850 * See GMMPAGEDESC for details on what is expected on input.
1851 * @thread EMT.
1852 */
1853GMMR0DECL(int) GMMR0AllocateHandyPages(PVM pVM, unsigned idCpu, uint32_t cPagesToUpdate, uint32_t cPagesToAlloc, PGMMPAGEDESC paPages)
1854{
1855 LogFlow(("GMMR0AllocateHandyPages: pVM=%p cPagesToUpdate=%#x cPagesToAlloc=%#x paPages=%p\n",
1856 pVM, cPagesToUpdate, cPagesToAlloc, paPages));
1857
1858 /*
1859 * Validate, get basics and take the semaphore.
1860 * (This is a relatively busy path, so make predictions where possible.)
1861 */
1862 PGMM pGMM;
1863 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
1864 PGVM pGVM = GVMMR0ByVM(pVM);
1865 if (RT_UNLIKELY(!pGVM))
1866 return VERR_INVALID_PARAMETER;
1867 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
1868 return VERR_NOT_OWNER;
1869
1870 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
1871 AssertMsgReturn( (cPagesToUpdate && cPagesToUpdate < 1024)
1872 || (cPagesToAlloc && cPagesToAlloc < 1024),
1873 ("cPagesToUpdate=%#x cPagesToAlloc=%#x\n", cPagesToUpdate, cPagesToAlloc),
1874 VERR_INVALID_PARAMETER);
1875
1876 unsigned iPage = 0;
1877 for (; iPage < cPagesToUpdate; iPage++)
1878 {
1879 AssertMsgReturn( ( paPages[iPage].HCPhysGCPhys <= GMM_GCPHYS_LAST
1880 && !(paPages[iPage].HCPhysGCPhys & PAGE_OFFSET_MASK))
1881 || paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS
1882 || paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE,
1883 ("#%#x: %RHp\n", iPage, paPages[iPage].HCPhysGCPhys),
1884 VERR_INVALID_PARAMETER);
1885 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
1886 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
1887 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
1888 AssertMsgReturn( paPages[iPage].idSharedPage <= GMM_PAGEID_LAST
1889 /*|| paPages[iPage].idSharedPage == NIL_GMM_PAGEID*/,
1890 ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
1891 }
1892
1893 for (; iPage < cPagesToAlloc; iPage++)
1894 {
1895 AssertMsgReturn(paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS, ("#%#x: %RHp\n", iPage, paPages[iPage].HCPhysGCPhys), VERR_INVALID_PARAMETER);
1896 AssertMsgReturn(paPages[iPage].idPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
1897 AssertMsgReturn(paPages[iPage].idSharedPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
1898 }
1899
1900 int rc = RTSemFastMutexRequest(pGMM->Mtx);
1901 AssertRC(rc);
1902
1903 /* No allocations before the initial reservation has been made! */
1904 if (RT_LIKELY( pGVM->gmm.s.Reserved.cBasePages
1905 && pGVM->gmm.s.Reserved.cFixedPages
1906 && pGVM->gmm.s.Reserved.cShadowPages))
1907 {
1908 /*
1909 * Perform the updates.
1910 * Stop on the first error.
1911 */
1912 for (iPage = 0; iPage < cPagesToUpdate; iPage++)
1913 {
1914 if (paPages[iPage].idPage != NIL_GMM_PAGEID)
1915 {
1916 PGMMPAGE pPage = gmmR0GetPage(pGMM, paPages[iPage].idPage);
1917 if (RT_LIKELY(pPage))
1918 {
1919 if (RT_LIKELY(GMM_PAGE_IS_PRIVATE(pPage)))
1920 {
1921 if (RT_LIKELY(pPage->Private.hGVM == pGVM->hSelf))
1922 {
1923 AssertCompile(NIL_RTHCPHYS > GMM_GCPHYS_LAST && GMM_GCPHYS_UNSHAREABLE > GMM_GCPHYS_LAST);
1924 if (RT_LIKELY(paPages[iPage].HCPhysGCPhys <= GMM_GCPHYS_LAST))
1925 pPage->Private.pfn = paPages[iPage].HCPhysGCPhys >> PAGE_SHIFT;
1926 else if (paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE)
1927 pPage->Private.pfn = GMM_PAGE_PFN_UNSHAREABLE;
1928 /* else: NIL_RTHCPHYS nothing */
1929
1930 paPages[iPage].idPage = NIL_GMM_PAGEID;
1931 paPages[iPage].HCPhysGCPhys = NIL_RTHCPHYS;
1932 }
1933 else
1934 {
1935 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not owner! hGVM=%#x hSelf=%#x\n",
1936 iPage, paPages[iPage].idPage, pPage->Private.hGVM, pGVM->hSelf));
1937 rc = VERR_GMM_NOT_PAGE_OWNER;
1938 break;
1939 }
1940 }
1941 else
1942 {
1943 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not private! %.*Rhxs\n", iPage, paPages[iPage].idPage, sizeof(*pPage), pPage));
1944 rc = VERR_GMM_PAGE_NOT_PRIVATE;
1945 break;
1946 }
1947 }
1948 else
1949 {
1950 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not found! (private)\n", iPage, paPages[iPage].idPage));
1951 rc = VERR_GMM_PAGE_NOT_FOUND;
1952 break;
1953 }
1954 }
1955
1956 if (paPages[iPage].idSharedPage != NIL_GMM_PAGEID)
1957 {
1958 PGMMPAGE pPage = gmmR0GetPage(pGMM, paPages[iPage].idSharedPage);
1959 if (RT_LIKELY(pPage))
1960 {
1961 if (RT_LIKELY(GMM_PAGE_IS_SHARED(pPage)))
1962 {
1963 AssertCompile(NIL_RTHCPHYS > GMM_GCPHYS_LAST && GMM_GCPHYS_UNSHAREABLE > GMM_GCPHYS_LAST);
1964 Assert(pPage->Shared.cRefs);
1965 Assert(pGVM->gmm.s.cSharedPages);
1966 Assert(pGVM->gmm.s.Allocated.cBasePages);
1967
1968 pGVM->gmm.s.cSharedPages--;
1969 pGVM->gmm.s.Allocated.cBasePages--;
1970 if (!--pPage->Shared.cRefs)
1971 gmmR0FreeSharedPage(pGMM, paPages[iPage].idSharedPage, pPage);
1972
1973 paPages[iPage].idSharedPage = NIL_GMM_PAGEID;
1974 }
1975 else
1976 {
1977 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not shared!\n", iPage, paPages[iPage].idSharedPage));
1978 rc = VERR_GMM_PAGE_NOT_SHARED;
1979 break;
1980 }
1981 }
1982 else
1983 {
1984 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not found! (shared)\n", iPage, paPages[iPage].idSharedPage));
1985 rc = VERR_GMM_PAGE_NOT_FOUND;
1986 break;
1987 }
1988 }
1989 }
1990
1991 /*
1992 * Join paths with GMMR0AllocatePages for the allocation.
1993 * Note! gmmR0AllocateMoreChunks may leave the protection of the mutex!
1994 */
1995 while (RT_SUCCESS(rc))
1996 {
1997 rc = gmmR0AllocatePages(pGMM, pGVM, cPagesToAlloc, paPages, GMMACCOUNT_BASE);
1998 if ( rc != VERR_GMM_SEED_ME
1999 || pGMM->fLegacyAllocationMode)
2000 break;
2001 rc = gmmR0AllocateMoreChunks(pGMM, pGVM, &pGMM->Private, cPagesToAlloc);
2002 }
2003 }
2004 else
2005 rc = VERR_WRONG_ORDER;
2006
2007 RTSemFastMutexRelease(pGMM->Mtx);
2008 LogFlow(("GMMR0AllocateHandyPages: returns %Rrc\n", rc));
2009 return rc;
2010}
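
To illustrate the calling contract: the first cPagesToUpdate descriptors carry new guest addresses (and/or shared pages to release) for pages handed out earlier, while the descriptors up to cPagesToAlloc must come in cleared and are filled with fresh pages on success. A minimal caller-side sketch; the array size, loop bounds and source arrays are illustrative assumptions, and the real caller lives in PGM and goes through the VMMR0 dispatcher:

    GMMPAGEDESC aPages[64];
    /* Entries to update: where previously allocated private pages ended up. */
    for (uint32_t i = 0; i < cToUpdate; i++)
    {
        aPages[i].HCPhysGCPhys = aGCPhys[i];        /* page aligned, <= GMM_GCPHYS_LAST */
        aPages[i].idPage       = aidPages[i];       /* private page to update */
        aPages[i].idSharedPage = NIL_GMM_PAGEID;    /* or a shared page to release */
    }
    /* Entries only being allocated must be cleared on input; the update range
       doubles as the front of the allocation range. */
    for (uint32_t i = cToUpdate; i < cToAlloc; i++)
    {
        aPages[i].HCPhysGCPhys = NIL_RTHCPHYS;
        aPages[i].idPage       = NIL_GMM_PAGEID;
        aPages[i].idSharedPage = NIL_GMM_PAGEID;
    }
    int rc = GMMR0AllocateHandyPages(pVM, idCpu, cToUpdate, cToAlloc, &aPages[0]);
    /* On success, aPages[0..cToAlloc) hold fresh idPage values and the host
       physical addresses of the new pages in HCPhysGCPhys. */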
2011
2012
2013/**
2014 * Allocate one or more pages.
2015 *
2016 * This is typically used for ROMs and MMIO2 (VRAM) during VM creation.
2017 * The allocated pages are not cleared and will contain random garbage.
2018 *
2019 * @returns VBox status code:
2020 * @retval VINF_SUCCESS on success.
2021 * @retval VERR_NOT_OWNER if the caller is not an EMT.
2022 * @retval VERR_GMM_SEED_ME if seeding via GMMR0SeedChunk is necessary.
2023 * @retval VERR_GMM_HIT_GLOBAL_LIMIT if we've exhausted the available pages.
2024 * @retval VERR_GMM_HIT_VM_ACCOUNT_LIMIT if we've hit the VM account limit,
2025 * that is, we're trying to allocate more than we've reserved.
2026 *
2027 * @param pVM Pointer to the shared VM structure.
2028 * @param idCpu VCPU id
2029 * @param cPages The number of pages to allocate.
2030 * @param paPages Pointer to the page descriptors.
2031 * See GMMPAGEDESC for details on what is expected on input.
2032 * @param enmAccount The account to charge.
2033 *
2034 * @thread EMT.
2035 */
2036GMMR0DECL(int) GMMR0AllocatePages(PVM pVM, unsigned idCpu, uint32_t cPages, PGMMPAGEDESC paPages, GMMACCOUNT enmAccount)
2037{
2038 LogFlow(("GMMR0AllocatePages: pVM=%p cPages=%#x paPages=%p enmAccount=%d\n", pVM, cPages, paPages, enmAccount));
2039
2040 /*
2041 * Validate, get basics and take the semaphore.
2042 */
2043 PGMM pGMM;
2044 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2045 PGVM pGVM = GVMMR0ByVM(pVM);
2046 if (RT_UNLIKELY(!pGVM))
2047 return VERR_INVALID_PARAMETER;
2048 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2049 return VERR_NOT_OWNER;
2050
2051 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
2052 AssertMsgReturn(enmAccount > GMMACCOUNT_INVALID && enmAccount < GMMACCOUNT_END, ("%d\n", enmAccount), VERR_INVALID_PARAMETER);
2053 AssertMsgReturn(cPages > 0 && cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
2054
2055 for (unsigned iPage = 0; iPage < cPages; iPage++)
2056 {
2057 AssertMsgReturn( paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS
2058 || paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE
2059 || ( enmAccount == GMMACCOUNT_BASE
2060 && paPages[iPage].HCPhysGCPhys <= GMM_GCPHYS_LAST
2061 && !(paPages[iPage].HCPhysGCPhys & PAGE_OFFSET_MASK)),
2062 ("#%#x: %RHp enmAccount=%d\n", iPage, paPages[iPage].HCPhysGCPhys, enmAccount),
2063 VERR_INVALID_PARAMETER);
2064 AssertMsgReturn(paPages[iPage].idPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
2065 AssertMsgReturn(paPages[iPage].idSharedPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
2066 }
2067
2068 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2069 AssertRC(rc);
2070
2071 /* No allocations before the initial reservation has been made! */
2072 if (RT_LIKELY( pGVM->gmm.s.Reserved.cBasePages
2073 && pGVM->gmm.s.Reserved.cFixedPages
2074 && pGVM->gmm.s.Reserved.cShadowPages))
2075 {
2076 /*
2077 * gmmR0AllocatePages seed loop.
2078 * Note! gmmR0AllocateMoreChunks may leave the protection of the mutex!
2079 */
2080 while (RT_SUCCESS(rc))
2081 {
2082 rc = gmmR0AllocatePages(pGMM, pGVM, cPages, paPages, enmAccount);
2083 if ( rc != VERR_GMM_SEED_ME
2084 || pGMM->fLegacyAllocationMode)
2085 break;
2086 rc = gmmR0AllocateMoreChunks(pGMM, pGVM, &pGMM->Private, cPages);
2087 }
2088 }
2089 else
2090 rc = VERR_WRONG_ORDER;
2091
2092 RTSemFastMutexRelease(pGMM->Mtx);
2093 LogFlow(("GMMR0AllocatePages: returns %Rrc\n", rc));
2094 return rc;
2095}
2096
2097
2098/**
2099 * VMMR0 request wrapper for GMMR0AllocatePages.
2100 *
2101 * @returns see GMMR0AllocatePages.
2102 * @param pVM Pointer to the shared VM structure.
2103 * @param idCpu VCPU id
2104 * @param pReq The request packet.
2105 */
2106GMMR0DECL(int) GMMR0AllocatePagesReq(PVM pVM, unsigned idCpu, PGMMALLOCATEPAGESREQ pReq)
2107{
2108 /*
2109 * Validate input and pass it on.
2110 */
2111 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2112 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2113 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[0]),
2114 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[0])),
2115 VERR_INVALID_PARAMETER);
2116 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[pReq->cPages]),
2117 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[pReq->cPages])),
2118 VERR_INVALID_PARAMETER);
2119
2120 return GMMR0AllocatePages(pVM, idCpu, pReq->cPages, &pReq->aPages[0], pReq->enmAccount);
2121}
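
The request packet is variable-sized, and the assertions above pin cbReq to exactly RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[cPages]). A minimal ring-3 sketch of building one, assuming the usual SUPVMMR0REQHDR convention; buffer lifetime and the actual ring-0 dispatch are elided:

    uint32_t cbReq = RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[cPages]);
    PGMMALLOCATEPAGESREQ pReq = (PGMMALLOCATEPAGESREQ)RTMemAllocZ(cbReq);
    pReq->Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
    pReq->Hdr.cbReq    = cbReq;
    pReq->enmAccount   = GMMACCOUNT_BASE;
    pReq->cPages       = cPages;
    for (uint32_t i = 0; i < cPages; i++)
    {
        pReq->aPages[i].HCPhysGCPhys = NIL_RTHCPHYS;   /* or the guest address for base pages */
        pReq->aPages[i].idPage       = NIL_GMM_PAGEID;
        pReq->aPages[i].idSharedPage = NIL_GMM_PAGEID;
    }
    /* ... hand pReq to ring-0 (VMMR0_DO_GMM_ALLOCATE_PAGES) and read back the page IDs ... */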
2122
2123
2124/**
2125 * Frees a chunk, giving it back to the host OS.
2126 *
2127 * @param pGMM Pointer to the GMM instance.
2128 * @param pGVM This is set when called from GMMR0CleanupVM so we can
2129 * unmap and free the chunk in one go.
2130 * @param pChunk The chunk to free.
2131 */
2132static void gmmR0FreeChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk)
2133{
2134 Assert(pChunk->Core.Key != NIL_GMM_CHUNKID);
2135
2136 /*
2137 * Cleanup hack! Unmap the chunk from the caller's address space.
2138 */
2139 if ( pChunk->cMappings
2140 && pGVM)
2141 gmmR0UnmapChunk(pGMM, pGVM, pChunk);
2142
2143 /*
2144 * If there are current mappings of the chunk, then request the
2145 * VMs to unmap them. Reposition the chunk in the free list so
2146 * it won't be a likely candidate for allocations.
2147 */
2148 if (pChunk->cMappings)
2149 {
2150 /** @todo R0 -> VM request */
2151 }
2152 else
2153 {
2154 /*
2155 * Try to free the memory object.
2156 */
2157 int rc = RTR0MemObjFree(pChunk->MemObj, false /* fFreeMappings */);
2158 if (RT_SUCCESS(rc))
2159 {
2160 pChunk->MemObj = NIL_RTR0MEMOBJ;
2161
2162 /*
2163 * Unlink it from everywhere.
2164 */
2165 gmmR0UnlinkChunk(pChunk);
2166
2167 PAVLU32NODECORE pCore = RTAvlU32Remove(&pGMM->pChunks, pChunk->Core.Key);
2168 Assert(pCore == &pChunk->Core); NOREF(pCore);
2169
2170 PGMMCHUNKTLBE pTlbe = &pGMM->ChunkTLB.aEntries[GMM_CHUNKTLB_IDX(pChunk->Core.Key)];
2171 if (pTlbe->pChunk == pChunk)
2172 {
2173 pTlbe->idChunk = NIL_GMM_CHUNKID;
2174 pTlbe->pChunk = NULL;
2175 }
2176
2177 Assert(pGMM->cChunks > 0);
2178 pGMM->cChunks--;
2179
2180 /*
2181 * Free the Chunk ID and struct.
2182 */
2183 gmmR0FreeChunkId(pGMM, pChunk->Core.Key);
2184 pChunk->Core.Key = NIL_GMM_CHUNKID;
2185
2186 RTMemFree(pChunk->paMappings);
2187 pChunk->paMappings = NULL;
2188
2189 RTMemFree(pChunk);
2190 }
2191 else
2192 AssertRC(rc);
2193 }
2194}
2195
2196
2197/**
2198 * Free page worker.
2199 *
2200 * The caller does all the statistics decrementing; we do all the incrementing.
2201 *
2202 * @param pGMM Pointer to the GMM instance data.
2203 * @param pChunk Pointer to the chunk this page belongs to.
2204 * @param idPage The Page ID.
2205 * @param pPage Pointer to the page.
2206 */
2207static void gmmR0FreePageWorker(PGMM pGMM, PGMMCHUNK pChunk, uint32_t idPage, PGMMPAGE pPage)
2208{
2209 Log3(("F pPage=%p iPage=%#x/%#x u2State=%d iFreeHead=%#x\n",
2210 pPage, pPage - &pChunk->aPages[0], idPage, pPage->Common.u2State, pChunk->iFreeHead)); NOREF(idPage);
2211
2212 /*
2213 * Put the page on the free list.
2214 */
2215 pPage->u = 0;
2216 pPage->Free.u2State = GMM_PAGE_STATE_FREE;
2217 Assert(pChunk->iFreeHead < RT_ELEMENTS(pChunk->aPages) || pChunk->iFreeHead == UINT16_MAX);
2218 pPage->Free.iNext = pChunk->iFreeHead;
2219 pChunk->iFreeHead = pPage - &pChunk->aPages[0];
2220
2221 /*
2222 * Update statistics (the cShared/cPrivate stats are up to date already),
2223 * and relink the chunk if necessary.
2224 */
2225 if ((pChunk->cFree & GMM_CHUNK_FREE_SET_MASK) == 0)
2226 {
2227 gmmR0UnlinkChunk(pChunk);
2228 pChunk->cFree++;
2229 gmmR0LinkChunk(pChunk, pChunk->cShared ? &pGMM->Shared : &pGMM->Private);
2230 }
2231 else
2232 {
2233 pChunk->cFree++;
2234 pChunk->pSet->cPages++;
2235
2236 /*
2237 * If the chunk becomes empty, consider giving memory back to the host OS.
2238 *
2239 * The current strategy is to try to give it back if there are other chunks
2240 * in this free list, meaning if there are at least 240 free pages in this
2241 * category. Note that since there are probably mappings of the chunk,
2242 * it won't be freed up instantly, which probably screws up this logic
2243 * a bit...
2244 */
2245 if (RT_UNLIKELY( pChunk->cFree == GMM_CHUNK_NUM_PAGES
2246 && pChunk->pFreeNext
2247 && pChunk->pFreePrev
2248 && !pGMM->fLegacyAllocationMode))
2249 gmmR0FreeChunk(pGMM, NULL, pChunk);
2250 }
2251}
2252
2253
2254/**
2255 * Frees a shared page; the page is known to exist and be valid.
2256 *
2257 * @param pGMM Pointer to the GMM instance.
2258 * @param idPage The Page ID
2259 * @param pPage The page structure.
2260 */
2261DECLINLINE(void) gmmR0FreeSharedPage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage)
2262{
2263 PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
2264 Assert(pChunk);
2265 Assert(pChunk->cFree < GMM_CHUNK_NUM_PAGES);
2266 Assert(pChunk->cShared > 0);
2267 Assert(pGMM->cSharedPages > 0);
2268 Assert(pGMM->cAllocatedPages > 0);
2269 Assert(!pPage->Shared.cRefs);
2270
2271 pChunk->cShared--;
2272 pGMM->cAllocatedPages--;
2273 pGMM->cSharedPages--;
2274 gmmR0FreePageWorker(pGMM, pChunk, idPage, pPage);
2275}
2276
2277
2278/**
2279 * Frees a private page; the page is known to exist and be valid.
2280 *
2281 * @param pGMM Pointer to the GMM instance.
2282 * @param idPage The Page ID
2283 * @param pPage The page structure.
2284 */
2285DECLINLINE(void) gmmR0FreePrivatePage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage)
2286{
2287 PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
2288 Assert(pChunk);
2289 Assert(pChunk->cFree < GMM_CHUNK_NUM_PAGES);
2290 Assert(pChunk->cPrivate > 0);
2291 Assert(pGMM->cAllocatedPages > 0);
2292
2293 pChunk->cPrivate--;
2294 pGMM->cAllocatedPages--;
2295 gmmR0FreePageWorker(pGMM, pChunk, idPage, pPage);
2296}
2297
2298
2299/**
2300 * Common worker for GMMR0FreePages and GMMR0BalloonedPages.
2301 *
2302 * @returns VBox status code:
2303 * @retval xxx
2304 *
2305 * @param pGMM Pointer to the GMM instance data.
2306 * @param pGVM Pointer to the shared VM structure.
2307 * @param cPages The number of pages to free.
2308 * @param paPages Pointer to the page descriptors.
2309 * @param enmAccount The account this relates to.
2310 */
2311static int gmmR0FreePages(PGMM pGMM, PGVM pGVM, uint32_t cPages, PGMMFREEPAGEDESC paPages, GMMACCOUNT enmAccount)
2312{
2313 /*
2314 * Check that the request isn't impossible with respect to the account status.
2315 */
2316 switch (enmAccount)
2317 {
2318 case GMMACCOUNT_BASE:
2319 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cBasePages < cPages))
2320 {
2321 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cBasePages, cPages));
2322 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2323 }
2324 break;
2325 case GMMACCOUNT_SHADOW:
2326 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cShadowPages < cPages))
2327 {
2328 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cShadowPages, cPages));
2329 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2330 }
2331 break;
2332 case GMMACCOUNT_FIXED:
2333 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cFixedPages < cPages))
2334 {
2335 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cFixedPages, cPages));
2336 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2337 }
2338 break;
2339 default:
2340 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
2341 }
2342
2343 /*
2344 * Walk the descriptors and free the pages.
2345 *
2346 * Statistics (except the account) are being updated as we go along,
2347 * unlike the alloc code. Also, stop on the first error.
2348 */
2349 int rc = VINF_SUCCESS;
2350 uint32_t iPage;
2351 for (iPage = 0; iPage < cPages; iPage++)
2352 {
2353 uint32_t idPage = paPages[iPage].idPage;
2354 PGMMPAGE pPage = gmmR0GetPage(pGMM, idPage);
2355 if (RT_LIKELY(pPage))
2356 {
2357 if (RT_LIKELY(GMM_PAGE_IS_PRIVATE(pPage)))
2358 {
2359 if (RT_LIKELY(pPage->Private.hGVM == pGVM->hSelf))
2360 {
2361 Assert(pGVM->gmm.s.cPrivatePages);
2362 pGVM->gmm.s.cPrivatePages--;
2363 gmmR0FreePrivatePage(pGMM, idPage, pPage);
2364 }
2365 else
2366 {
2367 Log(("gmmR0FreePages: #%#x/%#x: not owner! hGVM=%#x hSelf=%#x\n", iPage, idPage,
2368 pPage->Private.hGVM, pGVM->hSelf));
2369 rc = VERR_GMM_NOT_PAGE_OWNER;
2370 break;
2371 }
2372 }
2373 else if (RT_LIKELY(GMM_PAGE_IS_SHARED(pPage)))
2374 {
2375 Assert(pGVM->gmm.s.cSharedPages);
2376 pGVM->gmm.s.cSharedPages--;
2377 Assert(pPage->Shared.cRefs);
2378 if (!--pPage->Shared.cRefs)
2379 gmmR0FreeSharedPage(pGMM, idPage, pPage);
2380 }
2381 else
2382 {
2383 Log(("gmmR0FreePages: #%#x/%#x: already free!\n", iPage, idPage));
2384 rc = VERR_GMM_PAGE_ALREADY_FREE;
2385 break;
2386 }
2387 }
2388 else
2389 {
2390 Log(("gmmR0FreePages: #%#x/%#x: not found!\n", iPage, idPage));
2391 rc = VERR_GMM_PAGE_NOT_FOUND;
2392 break;
2393 }
2394 paPages[iPage].idPage = NIL_GMM_PAGEID;
2395 }
2396
2397 /*
2398 * Update the account.
2399 */
2400 switch (enmAccount)
2401 {
2402 case GMMACCOUNT_BASE: pGVM->gmm.s.Allocated.cBasePages -= iPage; break;
2403 case GMMACCOUNT_SHADOW: pGVM->gmm.s.Allocated.cShadowPages -= iPage; break;
2404 case GMMACCOUNT_FIXED: pGVM->gmm.s.Allocated.cFixedPages -= iPage; break;
2405 default:
2406 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
2407 }
2408
2409 /*
2410 * Any threshold stuff to be done here?
2411 */
2412
2413 return rc;
2414}
2415
2416
2417/**
2418 * Free one or more pages.
2419 *
2420 * This is typically used at reset time or power off.
2421 *
2422 * @returns VBox status code:
2423 * @retval xxx
2424 *
2425 * @param pVM Pointer to the shared VM structure.
2426 * @param idCpu VCPU id
2427 * @param cPages The number of pages to free.
2428 * @param paPages Pointer to the page descriptors containing the Page IDs for each page.
2429 * @param enmAccount The account this relates to.
2430 * @thread EMT.
2431 */
2432GMMR0DECL(int) GMMR0FreePages(PVM pVM, unsigned idCpu, uint32_t cPages, PGMMFREEPAGEDESC paPages, GMMACCOUNT enmAccount)
2433{
2434 LogFlow(("GMMR0FreePages: pVM=%p cPages=%#x paPages=%p enmAccount=%d\n", pVM, cPages, paPages, enmAccount));
2435
2436 /*
2437 * Validate input and get the basics.
2438 */
2439 PGMM pGMM;
2440 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2441 PGVM pGVM = GVMMR0ByVM(pVM);
2442 if (RT_UNLIKELY(!pGVM))
2443 return VERR_INVALID_PARAMETER;
2444 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2445 return VERR_NOT_OWNER;
2446
2447 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
2448 AssertMsgReturn(enmAccount > GMMACCOUNT_INVALID && enmAccount < GMMACCOUNT_END, ("%d\n", enmAccount), VERR_INVALID_PARAMETER);
2449 AssertMsgReturn(cPages > 0 && cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
2450
2451 for (unsigned iPage = 0; iPage < cPages; iPage++)
2452 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
2453 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
2454 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
2455
2456 /*
2457 * Take the semaphore and call the worker function.
2458 */
2459 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2460 AssertRC(rc);
2461
2462 rc = gmmR0FreePages(pGMM, pGVM, cPages, paPages, enmAccount);
2463
2464 RTSemFastMutexRelease(pGMM->Mtx);
2465 LogFlow(("GMMR0FreePages: returns %Rrc\n", rc));
2466 return rc;
2467}
2468
2469
2470/**
2471 * VMMR0 request wrapper for GMMR0FreePages.
2472 *
2473 * @returns see GMMR0FreePages.
2474 * @param pVM Pointer to the shared VM structure.
2475 * @param idCpu VCPU id
2476 * @param pReq The request packet.
2477 */
2478GMMR0DECL(int) GMMR0FreePagesReq(PVM pVM, unsigned idCpu, PGMMFREEPAGESREQ pReq)
2479{
2480 /*
2481 * Validate input and pass it on.
2482 */
2483 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2484 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2485 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[0]),
2486 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[0])),
2487 VERR_INVALID_PARAMETER);
2488 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[pReq->cPages]),
2489 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[pReq->cPages])),
2490 VERR_INVALID_PARAMETER);
2491
2492 return GMMR0FreePages(pVM, idCpu, pReq->cPages, &pReq->aPages[0], pReq->enmAccount);
2493}
2494
2495
2496/**
2497 * Report back on a memory ballooning request.
2498 *
2499 * The request may or may not have been initiated by the GMM. If it was initiated
2500 * by the GMM, it is important that this function is called even if no pages were
2501 * ballooned.
2502 *
2503 * Since the whole purpose of ballooning is to free up guest RAM pages, this API
2504 * may also be given a set of related pages to be freed. These pages are assumed
2505 * to be on the base account.
2506 *
2507 * @returns VBox status code:
2508 * @retval xxx
2509 *
2510 * @param pVM Pointer to the shared VM structure.
2511 * @param idCpu VCPU id
2512 * @param cBalloonedPages The number of pages that were ballooned.
2513 * @param cPagesToFree The number of pages to be freed.
2514 * @param paPages Pointer to the page descriptors for the pages that are to be freed.
2515 * @param fCompleted Indicates whether the ballooning request was completed (true) or
2516 * if more pages are to come (false). If the ballooning was not
2517 * triggered by the GMM, don't set this.
2518 * @thread EMT.
2519 */
2520GMMR0DECL(int) GMMR0BalloonedPages(PVM pVM, unsigned idCpu, uint32_t cBalloonedPages, uint32_t cPagesToFree, PGMMFREEPAGEDESC paPages, bool fCompleted)
2521{
2522 LogFlow(("GMMR0BalloonedPages: pVM=%p cBalloonedPages=%#x cPagesToFree=%#x paPages=%p fCompleted=%RTbool\n",
2523 pVM, cBalloonedPages, cPagesToFree, paPages, fCompleted));
2524
2525 /*
2526 * Validate input and get the basics.
2527 */
2528 PGMM pGMM;
2529 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2530 PGVM pGVM = GVMMR0ByVM(pVM);
2531 if (RT_UNLIKELY(!pGVM))
2532 return VERR_INVALID_PARAMETER;
2533 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2534 return VERR_NOT_OWNER;
2535
2536 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
2537 AssertMsgReturn(cBalloonedPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cBalloonedPages), VERR_INVALID_PARAMETER);
2538 AssertMsgReturn(cPagesToFree <= cBalloonedPages, ("%#x\n", cPagesToFree), VERR_INVALID_PARAMETER);
2539
2540 for (unsigned iPage = 0; iPage < cPagesToFree; iPage++)
2541 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
2542 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
2543 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
2544
2545 /*
2546 * Take the semaphore and do some more validations.
2547 */
2548 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2549 AssertRC(rc);
2550 if (pGVM->gmm.s.Allocated.cBasePages >= cPagesToFree)
2551 {
2552 /*
2553 * Record the ballooned memory.
2554 */
2555 pGMM->cBalloonedPages += cBalloonedPages;
2556 if (pGVM->gmm.s.cReqBalloonedPages)
2557 {
2558 pGVM->gmm.s.cBalloonedPages += cBalloonedPages;
2559 pGVM->gmm.s.cReqActuallyBalloonedPages += cBalloonedPages;
2560 if (fCompleted)
2561 {
2562 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx; / VM: Total=%#llx Req=%#llx Actual=%#llx (completed)\n", cBalloonedPages,
2563 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqBalloonedPages, pGVM->gmm.s.cReqActuallyBalloonedPages));
2564
2565 /*
2566 * Anything we need to do here now when the request has been completed?
2567 */
2568 pGVM->gmm.s.cReqBalloonedPages = 0;
2569 }
2570 else
2571 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx Req=%#llx Actual=%#llx (pending)\n", cBalloonedPages,
2572 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqBalloonedPages, pGVM->gmm.s.cReqActuallyBalloonedPages));
2573 }
2574 else
2575 {
2576 pGVM->gmm.s.cBalloonedPages += cBalloonedPages;
2577 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx (user)\n",
2578 cBalloonedPages, pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages));
2579 }
2580
2581 /*
2582 * Any pages to free?
2583 */
2584 if (cPagesToFree)
2585 rc = gmmR0FreePages(pGMM, pGVM, cPagesToFree, paPages, GMMACCOUNT_BASE);
2586 }
2587 else
2588 {
2589 rc = VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2590 }
2591
2592 RTSemFastMutexRelease(pGMM->Mtx);
2593 LogFlow(("GMMR0BalloonedPages: returns %Rrc\n", rc));
2594 return rc;
2595}
2596
2597
2598/**
2599 * VMMR0 request wrapper for GMMR0BalloonedPages.
2600 *
2601 * @returns see GMMR0BalloonedPages.
2602 * @param pVM Pointer to the shared VM structure.
2603 * @param idCpu VCPU id
2604 * @param pReq The request packet.
2605 */
2606GMMR0DECL(int) GMMR0BalloonedPagesReq(PVM pVM, unsigned idCpu, PGMMBALLOONEDPAGESREQ pReq)
2607{
2608 /*
2609 * Validate input and pass it on.
2610 */
2611 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2612 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2613 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[0]),
2614 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[0])),
2615 VERR_INVALID_PARAMETER);
2616 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[pReq->cPagesToFree]),
2617 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[pReq->cPagesToFree])),
2618 VERR_INVALID_PARAMETER);
2619
2620 return GMMR0BalloonedPages(pVM, idCpu, pReq->cBalloonedPages, pReq->cPagesToFree, &pReq->aPages[0], pReq->fCompleted);
2621}
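
For a guest-initiated (user) inflation, a report might look as follows; this is a sketch under the same request-packet convention as above, with fCompleted left false since the GMM didn't ask for the ballooning, and the page-ID source array being an illustrative assumption:

    uint32_t cbReq = RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[cPagesToFree]);
    PGMMBALLOONEDPAGESREQ pReq = (PGMMBALLOONEDPAGESREQ)RTMemAllocZ(cbReq);
    pReq->Hdr.u32Magic    = SUPVMMR0REQHDR_MAGIC;
    pReq->Hdr.cbReq       = cbReq;
    pReq->cBalloonedPages = cBalloonedPages;    /* what the balloon driver grabbed */
    pReq->cPagesToFree    = cPagesToFree;       /* base account pages returned right away */
    pReq->fCompleted      = false;              /* not a GMM-initiated request */
    for (uint32_t i = 0; i < cPagesToFree; i++)
        pReq->aPages[i].idPage = aidPages[i];   /* IDs from earlier allocations (illustrative) */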
2622
2623
2624/**
2625 * Report balloon deflating.
2626 *
2627 * @returns VBox status code:
2628 * @retval xxx
2629 *
2630 * @param pVM Pointer to the shared VM structure.
2631 * @param idCpu VCPU id
2632 * @param cPages The number of pages that were let out of the balloon.
2633 * @thread EMT.
2634 */
2635GMMR0DECL(int) GMMR0DeflatedBalloon(PVM pVM, unsigned idCpu, uint32_t cPages)
2636{
2637 LogFlow(("GMMR0DeflatedBalloon: pVM=%p cPages=%#x\n", pVM, cPages));
2638
2639 /*
2640 * Validate input and get the basics.
2641 */
2642 PGMM pGMM;
2643 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2644 PGVM pGVM = GVMMR0ByVM(pVM);
2645 if (RT_UNLIKELY(!pGVM))
2646 return VERR_INVALID_PARAMETER;
2647 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2648 return VERR_NOT_OWNER;
2649
2650 AssertMsgReturn(cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
2651
2652 /*
2653 * Take the semaphore and do some more validations.
2654 */
2655 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2656 AssertRC(rc);
2657
2658 if (pGVM->gmm.s.cBalloonedPages >= cPages)
2659 {
2660 Assert(pGMM->cBalloonedPages >= pGVM->gmm.s.cBalloonedPages);
2661
2662 /*
2663 * Record it.
2664 */
2665 pGMM->cBalloonedPages -= cPages;
2666 pGVM->gmm.s.cBalloonedPages -= cPages;
2667 if (pGVM->gmm.s.cReqDeflatePages)
2668 {
2669 Log(("GMMR0DeflatedBalloon: -%#x - Global=%#llx / VM: Total=%#llx Req=%#llx\n", cPages,
2670 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqDeflatePages));
2671
2672 /*
2673 * Anything we need to do here now when the request has been completed?
2674 */
2675 pGVM->gmm.s.cReqDeflatePages = 0;
2676 }
2677 else
2678 Log(("GMMR0DeflatedBalloon: -%#x - Global=%#llx / VM: Total=%#llx\n", cPages,
2679 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages));
2680 }
2681 else
2682 {
2683 Log(("GMMR0DeflatedBalloon: cBalloonedPages=%#llx cPages=%#x\n", pGVM->gmm.s.cBalloonedPages, cPages));
2684 rc = VERR_GMM_ATTEMPT_TO_DEFLATE_TOO_MUCH;
2685 }
2686
2687 RTSemFastMutexRelease(pGMM->Mtx);
2688 LogFlow(("GMMR0DeflatedBalloon: returns %Rrc\n", rc));
2689 return rc;
2690}
2691
2692
2693/**
2694 * Unmaps a chunk previously mapped into the address space of the current process.
2695 *
2696 * @returns VBox status code.
2697 * @param pGMM Pointer to the GMM instance data.
2698 * @param pGVM Pointer to the Global VM structure.
2699 * @param pChunk Pointer to the chunk to be unmapped.
2700 */
2701static int gmmR0UnmapChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk)
2702{
2703 if (!pGMM->fLegacyAllocationMode)
2704 {
2705 /*
2706 * Find the mapping and try unmapping it.
2707 */
2708 for (uint32_t i = 0; i < pChunk->cMappings; i++)
2709 {
2710 Assert(pChunk->paMappings[i].pGVM && pChunk->paMappings[i].MapObj != NIL_RTR0MEMOBJ);
2711 if (pChunk->paMappings[i].pGVM == pGVM)
2712 {
2713 /* unmap */
2714 int rc = RTR0MemObjFree(pChunk->paMappings[i].MapObj, false /* fFreeMappings (NA) */);
2715 if (RT_SUCCESS(rc))
2716 {
2717 /* update the record. */
2718 pChunk->cMappings--;
2719 if (i < pChunk->cMappings)
2720 pChunk->paMappings[i] = pChunk->paMappings[pChunk->cMappings];
2721 pChunk->paMappings[pChunk->cMappings].MapObj = NIL_RTR0MEMOBJ;
2722 pChunk->paMappings[pChunk->cMappings].pGVM = NULL;
2723 }
2724 return rc;
2725 }
2726 }
2727 }
2728 else if (pChunk->hGVM == pGVM->hSelf)
2729 return VINF_SUCCESS;
2730
2731 Log(("gmmR0UnmapChunk: Chunk %#x is not mapped into pGVM=%p/%#x\n", pChunk->Core.Key, pGVM, pGVM->hSelf));
2732 return VERR_GMM_CHUNK_NOT_MAPPED;
2733}
2734
2735
2736/**
2737 * Maps a chunk into the user address space of the current process.
2738 *
2739 * @returns VBox status code.
2740 * @param pGMM Pointer to the GMM instance data.
2741 * @param pGVM Pointer to the Global VM structure.
2742 * @param pChunk Pointer to the chunk to be mapped.
2743 * @param ppvR3 Where to store the ring-3 address of the mapping.
2744 * In the VERR_GMM_CHUNK_ALREADY_MAPPED case, this will
2745 * contain the address of the existing mapping.
2746 */
2747static int gmmR0MapChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk, PRTR3PTR ppvR3)
2748{
2749 /*
2750 * If we're in legacy mode this is simple.
2751 */
2752 if (pGMM->fLegacyAllocationMode)
2753 {
2754 if (pChunk->hGVM != pGVM->hSelf)
2755 {
2756 Log(("gmmR0MapChunk: chunk %#x isn't owned by the caller: hGVM=%#x hSelf=%#x (legacy mode)\n", pChunk->Core.Key, pChunk->hGVM, pGVM->hSelf));
2757 return VERR_GMM_CHUNK_NOT_FOUND;
2758 }
2759
2760 *ppvR3 = RTR0MemObjAddressR3(pChunk->MemObj);
2761 return VINF_SUCCESS;
2762 }
2763
2764 /*
2765 * Check to see if the chunk is already mapped.
2766 */
2767 for (uint32_t i = 0; i < pChunk->cMappings; i++)
2768 {
2769 Assert(pChunk->paMappings[i].pGVM && pChunk->paMappings[i].MapObj != NIL_RTR0MEMOBJ);
2770 if (pChunk->paMappings[i].pGVM == pGVM)
2771 {
2772 *ppvR3 = RTR0MemObjAddressR3(pChunk->paMappings[i].MapObj);
2773 Log(("gmmR0MapChunk: chunk %#x is already mapped at %p!\n", pChunk->Core.Key, *ppvR3));
2774 return VERR_GMM_CHUNK_ALREADY_MAPPED;
2775 }
2776 }
2777
2778 /*
2779 * Do the mapping.
2780 */
2781 RTR0MEMOBJ MapObj;
2782 int rc = RTR0MemObjMapUser(&MapObj, pChunk->MemObj, (RTR3PTR)-1, 0, RTMEM_PROT_READ | RTMEM_PROT_WRITE, NIL_RTR0PROCESS);
2783 if (RT_SUCCESS(rc))
2784 {
2785 /* reallocate the array? */
2786 if ((pChunk->cMappings & 1 /*7*/) == 0)
2787 {
2788 void *pvMappings = RTMemRealloc(pChunk->paMappings, (pChunk->cMappings + 2 /*8*/) * sizeof(pChunk->paMappings[0]));
2789 if (RT_UNLIKELY(!pvMappings))
2790 {
2791 rc = RTR0MemObjFree(MapObj, false /* fFreeMappings (NA) */);
2792 AssertRC(rc);
2793 return VERR_NO_MEMORY;
2794 }
2795 pChunk->paMappings = (PGMMCHUNKMAP)pvMappings;
2796 }
2797
2798 /* insert new entry */
2799 pChunk->paMappings[pChunk->cMappings].MapObj = MapObj;
2800 pChunk->paMappings[pChunk->cMappings].pGVM = pGVM;
2801 pChunk->cMappings++;
2802
2803 *ppvR3 = RTR0MemObjAddressR3(MapObj);
2804 }
2805
2806 return rc;
2807}
2808
2809
2810/**
2811 * Map a chunk and/or unmap another chunk.
2812 *
2813 * The mapping and unmapping applies to the current process.
2814 *
2815 * This API does two things because it saves a kernel call per mapping
2816 * when the ring-3 mapping cache is full.
2817 *
2818 * @returns VBox status code.
2819 * @param pVM The VM.
2820 * @param idCpu VCPU id
2821 * @param idChunkMap The chunk to map. NIL_GMM_CHUNKID if nothing to map.
2822 * @param idChunkUnmap The chunk to unmap. NIL_GMM_CHUNKID if nothing to unmap.
2823 * @param ppvR3 Where to store the address of the mapped chunk. NULL is ok if nothing to map.
2824 * @thread EMT
2825 */
2826GMMR0DECL(int) GMMR0MapUnmapChunk(PVM pVM, unsigned idCpu, uint32_t idChunkMap, uint32_t idChunkUnmap, PRTR3PTR ppvR3)
2827{
2828 LogFlow(("GMMR0MapUnmapChunk: pVM=%p idChunkMap=%#x idChunkUnmap=%#x ppvR3=%p\n",
2829 pVM, idChunkMap, idChunkUnmap, ppvR3));
2830
2831 /*
2832 * Validate input and get the basics.
2833 */
2834 PGMM pGMM;
2835 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2836 PGVM pGVM = GVMMR0ByVM(pVM);
2837 if (RT_UNLIKELY(!pGVM))
2838 return VERR_INVALID_PARAMETER;
2839 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2840 return VERR_NOT_OWNER;
2841
2842 AssertCompile(NIL_GMM_CHUNKID == 0);
2843 AssertMsgReturn(idChunkMap <= GMM_CHUNKID_LAST, ("%#x\n", idChunkMap), VERR_INVALID_PARAMETER);
2844 AssertMsgReturn(idChunkUnmap <= GMM_CHUNKID_LAST, ("%#x\n", idChunkUnmap), VERR_INVALID_PARAMETER);
2845
2846 if ( idChunkMap == NIL_GMM_CHUNKID
2847 && idChunkUnmap == NIL_GMM_CHUNKID)
2848 return VERR_INVALID_PARAMETER;
2849
2850 if (idChunkMap != NIL_GMM_CHUNKID)
2851 {
2852 AssertPtrReturn(ppvR3, VERR_INVALID_POINTER);
2853 *ppvR3 = NIL_RTR3PTR;
2854 }
2855
2856 /*
2857 * Take the semaphore and do the work.
2858 *
2859 * The unmapping is done last since it's easier to undo a mapping than
2860 * to undo an unmapping. The ring-3 mapping cache cannot be so big
2861 * that it pushes the user virtual address space to within a chunk of
2862 * its limits, so no problem here.
2863 */
2864 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2865 AssertRC(rc);
2866
2867 PGMMCHUNK pMap = NULL;
2868 if (idChunkMap != NIL_GMM_CHUNKID)
2869 {
2870 pMap = gmmR0GetChunk(pGMM, idChunkMap);
2871 if (RT_LIKELY(pMap))
2872 rc = gmmR0MapChunk(pGMM, pGVM, pMap, ppvR3);
2873 else
2874 {
2875 Log(("GMMR0MapUnmapChunk: idChunkMap=%#x\n", idChunkMap));
2876 rc = VERR_GMM_CHUNK_NOT_FOUND;
2877 }
2878 }
2879
2880 if ( idChunkUnmap != NIL_GMM_CHUNKID
2881 && RT_SUCCESS(rc))
2882 {
2883 PGMMCHUNK pUnmap = gmmR0GetChunk(pGMM, idChunkUnmap);
2884 if (RT_LIKELY(pUnmap))
2885 rc = gmmR0UnmapChunk(pGMM, pGVM, pUnmap);
2886 else
2887 {
2888 Log(("GMMR0MapUnmapChunk: idChunkUnmap=%#x\n", idChunkUnmap));
2889 rc = VERR_GMM_CHUNK_NOT_FOUND;
2890 }
2891
2892 if (RT_FAILURE(rc) && pMap)
2893 gmmR0UnmapChunk(pGMM, pGVM, pMap);
2894 }
2895
2896 RTSemFastMutexRelease(pGMM->Mtx);
2897
2898 LogFlow(("GMMR0MapUnmapChunk: returns %Rrc\n", rc));
2899 return rc;
2900}
2901
2902
2903/**
2904 * VMMR0 request wrapper for GMMR0MapUnmapChunk.
2905 *
2906 * @returns see GMMR0MapUnmapChunk.
2907 * @param pVM Pointer to the shared VM structure.
2908 * @param idCpu VCPU id
2909 * @param pReq The request packet.
2910 */
2911GMMR0DECL(int) GMMR0MapUnmapChunkReq(PVM pVM, unsigned idCpu, PGMMMAPUNMAPCHUNKREQ pReq)
2912{
2913 /*
2914 * Validate input and pass it on.
2915 */
2916 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2917 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2918 AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);
2919
2920 return GMMR0MapUnmapChunk(pVM, idCpu, pReq->idChunkMap, pReq->idChunkUnmap, &pReq->pvR3);
2921}
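
A combined map+unmap is what the ring-3 chunk mapping cache issues when it has to evict an entry; something along these lines, with the field values being illustrative:

    GMMMAPUNMAPCHUNKREQ Req;
    Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
    Req.Hdr.cbReq    = sizeof(Req);
    Req.idChunkMap   = idChunkNeeded;           /* chunk we're about to access */
    Req.idChunkUnmap = idChunkEvicted;          /* NIL_GMM_CHUNKID while the cache has room */
    Req.pvR3         = NIL_RTR3PTR;
    /* ... dispatch to ring-0 (VMMR0_DO_GMM_MAP_UNMAP_CHUNK) ... */
    /* On success, Req.pvR3 holds the ring-3 address of the newly mapped chunk. */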
2922
2923
2924/**
2925 * Legacy mode API for supplying pages.
2926 *
2927 * The specified user address points to an allocation-chunk-sized block that
2928 * will be locked down and used by the GMM when the VM asks for pages.
2929 *
2930 * @returns VBox status code.
2931 * @param pVM The VM.
2932 * @param idCpu VCPU id
2933 * @param pvR3 Pointer to the chunk-sized memory block to lock down.
2934 */
2935GMMR0DECL(int) GMMR0SeedChunk(PVM pVM, unsigned idCpu, RTR3PTR pvR3)
2936{
2937 /*
2938 * Validate input and get the basics.
2939 */
2940 PGMM pGMM;
2941 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2942 PGVM pGVM = GVMMR0ByVM(pVM);
2943 if (RT_UNLIKELY(!pGVM))
2944 return VERR_INVALID_PARAMETER;
2945 if (RT_UNLIKELY(pGVM->aCpus[idCpu].hEMT != RTThreadNativeSelf()))
2946 return VERR_NOT_OWNER;
2947
2948 AssertPtrReturn(pvR3, VERR_INVALID_POINTER);
2949 AssertReturn(!(PAGE_OFFSET_MASK & pvR3), VERR_INVALID_POINTER);
2950
2951 if (!pGMM->fLegacyAllocationMode)
2952 {
2953 Log(("GMMR0SeedChunk: not in legacy allocation mode!\n"));
2954 return VERR_NOT_SUPPORTED;
2955 }
2956
2957 /*
2958 * Lock the memory before taking the semaphore.
2959 */
2960 RTR0MEMOBJ MemObj;
2961 int rc = RTR0MemObjLockUser(&MemObj, pvR3, GMM_CHUNK_SIZE, NIL_RTR0PROCESS);
2962 if (RT_SUCCESS(rc))
2963 {
2964 /*
2965 * Add a new chunk with our hGVM.
2966 */
2967 rc = gmmR0RegisterChunk(pGMM, &pGMM->Private, MemObj, pGVM->hSelf);
2968 if (RT_FAILURE(rc))
2969 RTR0MemObjFree(MemObj, false /* fFreeMappings */);
2970 }
2971
2972 LogFlow(("GMMR0SeedChunk: rc=%d (pvR3=%p)\n", rc, pvR3));
2973 return rc;
2974}
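
Ring-3 usage in legacy mode thus amounts to allocating a chunk-sized, page-aligned block and handing it in whenever the allocator returns VERR_GMM_SEED_ME. A sketch, with RTMemPageAlloc standing in for however the caller actually obtains the memory:

    void *pvSeed = RTMemPageAlloc(GMM_CHUNK_SIZE);          /* page aligned by definition */
    if (pvSeed)
        rc = GMMR0SeedChunk(pVM, idCpu, (RTR3PTR)pvSeed);   /* via VMMR0_DO_GMM_SEED_CHUNK in practice */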
2975