VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR0/GMMR0.cpp@6311

Last change on this file since 6311 was 6311, checked in by vboxsync, 17 years ago

Documentation updates.

Line 
1/* $Id: GMMR0.cpp 6311 2008-01-09 18:46:06Z vboxsync $ */
2/** @file
3 * GMM - Global Memory Manager.
4 */
5
6/*
7 * Copyright (C) 2007 innotek GmbH
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18
19/** @page pg_gmm GMM - The Global Memory Manager
20 *
21 * As the name indicates, this component is responsible for global memory
22 * management. Currently only guest RAM is allocated from the GMM, but this
23 * may change to include shadow page tables and other bits later.
24 *
25 * Guest RAM is managed as individual pages, but allocated from the host OS
26 * in chunks for reasons of portability / efficiency. To minimize the memory
 27 * footprint all tracking structures must be as small as possible without
28 * unnecessary performance penalties.
29 *
 30 * The allocation chunks have a fixed size, defined at compile time
31 * by the #GMM_CHUNK_SIZE \#define.
32 *
 33 * Each chunk is given a unique ID. Each page also has a unique ID. The
 34 * relationship between the two IDs is:
35 * @code
36 * GMM_CHUNK_SHIFT = log2(GMM_CHUNK_SIZE / PAGE_SIZE);
37 * idPage = (idChunk << GMM_CHUNK_SHIFT) | iPage;
38 * @endcode
39 * Where iPage is the index of the page within the chunk. This ID scheme
 40 * permits efficient chunk and page lookup, but it relies on the chunk size
 41 * being set at compile time. The chunks are organized in an AVL tree with their
42 * IDs being the keys.
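 *
 * Going the other way is just as cheap. A minimal sketch, using the
 * GMM_CHUNKID_SHIFT and GMM_PAGEID_IDX_MASK names that gmmR0GetPage below
 * uses for the shift count and the page index mask:
 * @code
 * idChunk = idPage >> GMM_CHUNKID_SHIFT;
 * iPage   = idPage & GMM_PAGEID_IDX_MASK;
 * @endcode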
43 *
44 * The physical address of each page in an allocation chunk is maintained by
45 * the #RTR0MEMOBJ and obtained using #RTR0MemObjGetPagePhysAddr. There is no
 46 * need to duplicate this information (it would cost 8 bytes per page if we did).
47 *
48 * So what do we need to track per page? Most importantly we need to know
49 * which state the page is in:
50 * - Private - Allocated for (eventually) backing one particular VM page.
 51 * - Shared - Read-only page that is used by one or more VMs and treated
52 * as COW by PGM.
53 * - Free - Not used by anyone.
54 *
55 * For the page replacement operations (sharing, defragmenting and freeing)
 56 * to be somewhat efficient, private pages need to be associated with a
57 * particular page in a particular VM.
58 *
59 * Tracking the usage of shared pages is impractical and expensive, so we'll
60 * settle for a reference counting system instead.
61 *
 62 * Free pages will be chained on LIFOs.
63 *
64 * On 64-bit systems we will use a 64-bit bitfield per page, while on 32-bit
65 * systems a 32-bit bitfield will have to suffice because of address space
66 * limitations. The #GMMPAGE structure shows the details.
67 *
68 *
69 * @section sec_gmm_alloc_strat Page Allocation Strategy
70 *
71 * The strategy for allocating pages has to take fragmentation and shared
 72 * pages into account, or we may end up with 2000 chunks with only
73 * a few pages in each. Shared pages cannot easily be reallocated because
74 * of the inaccurate usage accounting (see above). Private pages can be
75 * reallocated by a defragmentation thread in the same manner that sharing
76 * is done.
77 *
78 * The first approach is to manage the free pages in two sets depending on
79 * whether they are mainly for the allocation of shared or private pages.
80 * In the initial implementation there will be almost no possibility for
81 * mixing shared and private pages in the same chunk (only if we're really
82 * stressed on memory), but when we implement forking of VMs and have to
83 * deal with lots of COW pages it'll start getting kind of interesting.
84 *
85 * The sets are lists of chunks with approximately the same number of
 86 * free pages. Say the chunk size is 1MB, meaning 256 pages, and a set
 87 * consists of 16 lists. So, the first list will contain the chunks with
 88 * 1-16 free pages, the second covers 17-32, and so on. The chunks will be
89 * moved between the lists as pages are freed up or allocated.
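 *
 * A minimal sketch of the list selection; this is what gmmR0LinkChunk below
 * does with GMM_CHUNK_FREE_SET_SHIFT when (re)linking a chunk:
 * @code
 * iList = (pChunk->cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT;
 * pChunk->pFreeNext = pSet->apLists[iList];
 * pSet->apLists[iList] = pChunk;
 * @endcode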
90 *
91 *
92 * @section sec_gmm_costs Costs
93 *
 94 * The per-page cost in kernel space is 32 or 64 bits (see GMMPAGE) plus
 95 * whatever RTR0MEMOBJ entails. In addition there is the chunk cost of
 96 * approximately (sizeof(RTR0MEMOBJ) + sizeof(GMMCHUNK)) / 2^GMM_CHUNK_SHIFT bytes per page.
97 *
 98 * On Windows the per-page #RTR0MEMOBJ cost is 32 bits on 32-bit Windows
 99 * and 64 bits on 64-bit Windows (a PFN_NUMBER in the MDL). So, 64 bits per page.
100 * The cost on Linux is identical, but here it's because of sizeof(struct page *).
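 *
 * A rough worked example, assuming the hypothetical 1MB / 256 page chunk size
 * from above on a 64-bit host: the GMMPAGE array costs 256 * 8 = 2048 bytes
 * (8 bytes per page), while the remaining GMMCHUNK and RTR0MEMOBJ bookkeeping,
 * divided by 256, adds well under one byte per page.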
101 *
102 *
103 * @section sec_gmm_legacy Legacy Mode for Non-Tier-1 Platforms
104 *
105 * In legacy mode the page source is locked user pages, not
106 * #RTR0MemObjAllocPhysNC; this means that a page can only be allocated
107 * by the VM that locked it. We will make no attempt at implementing
108 * page sharing on these systems, just do enough to make it all work.
109 *
110 *
111 * @subsection sub_gmm_locking Serializing
112 *
113 * One simple fast mutex will be employed in the initial implementation, not
114 * two as mentioned in @ref subsec_pgmPhys_Serializing.
115 *
116 * @see @ref subsec_pgmPhys_Serializing
117 *
118 *
119 * @section sec_gmm_overcommit Memory Over-Commitment Management
120 *
121 * The GVM will have to do the system-wide memory over-commitment
122 * management. My current ideas are:
123 * - Per VM OC policy that indicates how much to initially commit
124 * to it and what to do in an out-of-memory situation.
125 * - Prevent overtaxing the host.
126 *
127 * There are some challenges here, the main ones are configurability and
128 * security. Should we, for instance, permit anyone to request 100% memory
129 * commitment? Who should be allowed to do runtime adjustments of the
130 * config? And how do we prevent these settings from being lost when the last
131 * VM process exits? The solution is probably to have an optional root
132 * daemon that will keep VMMR0.r0 in memory and enable the security measures.
133 *
134 *
135 *
136 * @section sec_gmm_numa NUMA
137 *
138 * NUMA considerations will be designed and implemented a bit later.
139 *
140 * The preliminary guess is that we will have to try to allocate memory as
141 * close as possible to the CPUs the VM is executed on (EMT and additional CPU
142 * threads), which means it's mostly about allocation and sharing policies.
143 * Both the scheduler and allocator interfaces will have to supply some NUMA
144 * info, and we'll need a way to calculate access costs.
145 *
146 */
147
148
149/*******************************************************************************
150* Header Files *
151*******************************************************************************/
152#define LOG_GROUP LOG_GROUP_GMM
153#include <VBox/gmm.h>
154#include "GMMR0Internal.h"
155#include <VBox/gvm.h>
156#include <VBox/log.h>
157#include <VBox/param.h>
158#include <VBox/err.h>
159#include <iprt/avl.h>
160#include <iprt/mem.h>
161#include <iprt/memobj.h>
162#include <iprt/semaphore.h>
163#include <iprt/string.h>
164
165
166/*******************************************************************************
167* Structures and Typedefs *
168*******************************************************************************/
169/** Pointer to set of free chunks. */
170typedef struct GMMCHUNKFREESET *PGMMCHUNKFREESET;
171
172/** Pointer to a GMM allocation chunk. */
173typedef struct GMMCHUNK *PGMMCHUNK;
174
175/**
176 * The per-page tracking structure employed by the GMM.
177 *
178 * On 32-bit hosts some trickery is necessary to compress all
179 * the information into 32 bits. When the fSharedFree member is set,
180 * the 30th bit decides whether it's a free page or not.
181 *
182 * Because of the different layout on 32-bit and 64-bit hosts, macros
183 * are used to get and set some of the data.
184 */
185typedef union GMMPAGE
186{
187#if HC_ARCH_BITS == 64
188 /** Unsigned integer view. */
189 uint64_t u;
190
191 /** The common view. */
192 struct GMMPAGECOMMON
193 {
194 uint32_t uStuff1 : 32;
195 uint32_t uStuff2 : 30;
196 /** The page state. */
197 uint32_t u2State : 2;
198 } Common;
199
200 /** The view of a private page. */
201 struct GMMPAGEPRIVATE
202 {
203 /** The guest page frame number. (Max addressable: 2 ^ 44 - 16) */
204 uint32_t pfn;
205 /** The GVM handle. (64K VMs) */
206 uint32_t hGVM : 16;
207 /** Reserved. */
208 uint32_t u16Reserved : 14;
209 /** The page state. */
210 uint32_t u2State : 2;
211 } Private;
212
213 /** The view of a shared page. */
214 struct GMMPAGESHARED
215 {
216 /** The reference count. */
217 uint32_t cRefs;
218 /** Reserved. Checksum or something? Two hGVMs for forking? */
219 uint32_t u30Reserved : 30;
220 /** The page state. */
221 uint32_t u2State : 2;
222 } Shared;
223
224 /** The view of a free page. */
225 struct GMMPAGEFREE
226 {
227 /** The index of the next page in the free list. */
228 uint32_t iNext;
229 /** Reserved. Checksum or something? */
230 uint32_t u30Reserved : 30;
231 /** The page state. */
232 uint32_t u2State : 2;
233 } Free;
234
235#else /* 32-bit */
236 /** Unsigned integer view. */
237 uint32_t u;
238
239 /** The common view. */
240 struct GMMPAGECOMMON
241 {
242 uint32_t uStuff : 30;
243 /** The page state. */
244 uint32_t u2State : 2;
245 } Common;
246
247 /** The view of a private page. */
248 struct GMMPAGEPRIVATE
249 {
250 /** The guest page frame number. (Max addressable: 2 ^ 36) */
251 uint32_t pfn : 24;
252 /** The GVM handle. (127 VMs) */
253 uint32_t hGVM : 7;
254 /** The top page state bit, MBZ. */
255 uint32_t fZero : 1;
256 } Private;
257
258 /** The view of a shared page. */
259 struct GMMPAGESHARED
260 {
261 /** The reference count. */
262 uint32_t cRefs : 30;
263 /** The page state. */
264 uint32_t u2State : 2;
265 } Shared;
266
267 /** The view of a free page. */
268 struct GMMPAGEFREE
269 {
270 /** The index of the next page in the free list. */
271 uint32_t iNext : 30;
272 /** The page state. */
273 uint32_t u2State : 2;
274 } Free;
275#endif
276} GMMPAGE;
277/** Pointer to a GMMPAGE. */
278typedef GMMPAGE *PGMMPAGE;
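
/*
 * A minimal sketch of putting a page back on the chunk's free list, mirroring
 * what gmmR0CleanupVMScanChunk further down does (pChunk and iPage are assumed
 * to be in scope):
 *
 *      pChunk->aPages[iPage].u = 0;
 *      pChunk->aPages[iPage].Free.iNext = pChunk->iFreeHead;
 *      pChunk->aPages[iPage].Free.u2State = GMM_PAGE_STATE_FREE;
 *      pChunk->iFreeHead = iPage;
 */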
279
280
281/** @name The Page States.
282 * @{ */
283/** A private page. */
284#define GMM_PAGE_STATE_PRIVATE 0
285/** A private page - alternative value used on the 32-bit implementation.
286 * This will never be used on 64-bit hosts. */
287#define GMM_PAGE_STATE_PRIVATE_32 1
288/** A shared page. */
289#define GMM_PAGE_STATE_SHARED 2
290/** A free page. */
291#define GMM_PAGE_STATE_FREE 3
292/** @} */
293
294
295/** @def GMM_PAGE_IS_PRIVATE
296 *
297 * @returns true if private, false if not.
298 * @param pPage The GMM page.
299 */
300#if HC_ARCH_BITS == 64
301# define GMM_PAGE_IS_PRIVATE(pPage) ( (pPage)->Common.u2State == GMM_PAGE_STATE_PRIVATE )
302#else
303# define GMM_PAGE_IS_PRIVATE(pPage) ( (pPage)->Private.fZero == 0 )
304#endif
305
306/** @def GMM_PAGE_IS_SHARED
307 *
308 * @returns true if shared, false if not.
309 * @param pPage The GMM page.
310 */
311#define GMM_PAGE_IS_SHARED(pPage) ( (pPage)->Common.u2State == GMM_PAGE_STATE_SHARED )
312
313/** @def GMM_PAGE_IS_FREE
314 *
315 * @returns true if free, false if not.
316 * @param pPage The GMM page.
317 */
318#define GMM_PAGE_IS_FREE(pPage) ( (pPage)->Common.u2State == GMM_PAGE_STATE_FREE )
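
/*
 * Typical use of the predicates above when classifying the pages of a chunk;
 * a sketch based on the scan loop in gmmR0CleanupVMScanChunk below:
 *
 *      if (GMM_PAGE_IS_PRIVATE(&pChunk->aPages[iPage]))
 *          cPrivate++;
 *      else if (GMM_PAGE_IS_FREE(&pChunk->aPages[iPage]))
 *          cFree++;
 *      else
 *          cShared++;
 */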
319
320/** @def GMM_PAGE_PFN_END
321 * The end of the valid guest pfn range, {0..GMM_PAGE_PFN_END-1}.
322 * @remark Some of the values outside the range have special meanings, see related \#defines.
323 */
324#if HC_ARCH_BITS == 64
325# define GMM_PAGE_PFN_END UINT32_C(0xfffffff0)
326#else
327# define GMM_PAGE_PFN_END UINT32_C(0x00fffff0)
328#endif
329
330/** @def GMM_PAGE_PFN_UNSHAREABLE
331 * Indicates that this page isn't used for normal guest memory and thus isn't shareable.
332 */
333#if HC_ARCH_BITS == 64
334# define GMM_PAGE_PFN_UNSHAREABLE UINT32_C(0xfffffff1)
335#else
336# define GMM_PAGE_PFN_UNSHAREABLE UINT32_C(0x00fffff1)
337#endif
338
339/** @def GMM_GCPHYS_END
340 * The end of the valid guest physical address range as it applies to GMM pages.
341 *
342 * This must reflect the constraints imposed by the RTGCPHYS type and
343 * the guest page frame number used internally in GMMPAGE. */
344#define GMM_GCPHYS_END UINT32_C(0xfffff000)
345
346
347/**
348 * A GMM allocation chunk ring-3 mapping record.
349 *
350 * This should really be associated with a session and not a VM, but
351 * it's simpler to associate it with a VM and clean up when the VM object
352 * is destroyed.
353 */
354typedef struct GMMCHUNKMAP
355{
356 /** The mapping object. */
357 RTR0MEMOBJ MapObj;
358 /** The VM owning the mapping. */
359 PGVM pGVM;
360} GMMCHUNKMAP;
361/** Pointer to a GMM allocation chunk mapping. */
362typedef struct GMMCHUNKMAP *PGMMCHUNKMAP;
363
364
365/**
366 * A GMM allocation chunk.
367 */
368typedef struct GMMCHUNK
369{
370 /** The AVL node core.
371 * The Key is the chunk ID. */
372 AVLU32NODECORE Core;
373 /** The memory object.
374 * Either from RTR0MemObjAllocPhysNC or RTR0MemObjLockUser depending on
375 * what the host can dish up. */
376 RTR0MEMOBJ MemObj;
377 /** Pointer to the next chunk in the free list. */
378 PGMMCHUNK pFreeNext;
379 /** Pointer to the previous chunk in the free list. */
380 PGMMCHUNK pFreePrev;
381 /** Pointer to the free set this chunk belongs to. NULL for
382 * chunks with no free pages. */
383 PGMMCHUNKFREESET pSet;
384 /** Pointer to an array of mappings. */
385 PGMMCHUNKMAP paMappings;
386 /** The number of mappings. */
387 uint16_t cMappings;
388 /** The head of the list of free pages. UINT16_MAX is the NIL value. */
389 uint16_t iFreeHead;
390 /** The number of free pages. */
391 uint16_t cFree;
392 /** The GVM handle of the VM that first allocated pages from this chunk; this
393 * is used as a preference when there are several chunks to choose from.
394 * In legacy mode this is a requirement rather than a preference. */
395 uint16_t hGVM;
396 /** The number of private pages. */
397 uint16_t cPrivate;
398 /** The number of shared pages. */
399 uint16_t cShared;
400#if HC_ARCH_BITS == 64
401 /** Reserved for later. */
402 uint16_t au16Reserved[2];
403#endif
404 /** The pages. */
405 GMMPAGE aPages[GMM_CHUNK_SIZE >> PAGE_SHIFT];
406} GMMCHUNK;
407
408
409/**
410 * An allocation chunk TLB entry.
411 */
412typedef struct GMMCHUNKTLBE
413{
414 /** The chunk id. */
415 uint32_t idChunk;
416 /** Pointer to the chunk. */
417 PGMMCHUNK pChunk;
418} GMMCHUNKTLBE;
419/** Pointer to an allocation chunk TLB entry. */
420typedef GMMCHUNKTLBE *PGMMCHUNKTLBE;
421
422
423/** The number of entries in the allocation chunk TLB. */
424#define GMM_CHUNKTLB_ENTRIES 32
425/** Gets the TLB entry index for the given Chunk ID. */
426#define GMM_CHUNKTLB_IDX(idChunk) ( (idChunk) & (GMM_CHUNKTLB_ENTRIES - 1) )
427
428/**
429 * An allocation chunk TLB.
430 */
431typedef struct GMMCHUNKTLB
432{
433 /** The TLB entries. */
434 GMMCHUNKTLBE aEntries[GMM_CHUNKTLB_ENTRIES];
435} GMMCHUNKTLB;
436/** Pointer to an allocation chunk TLB. */
437typedef GMMCHUNKTLB *PGMMCHUNKTLB;
438
439
440/** The number of lists in a set. */
441#define GMM_CHUNK_FREE_SET_LISTS 16
442/** The GMMCHUNK::cFree shift count. */
443#define GMM_CHUNK_FREE_SET_SHIFT 4
444/** The GMMCHUNK::cFree mask for use when considering relinking a chunk. */
445#define GMM_CHUNK_FREE_SET_MASK 15
446
447/**
448 * A set of free chunks.
449 */
450typedef struct GMMCHUNKFREESET
451{
452 /** The number of free pages in the set. */
453 uint64_t cPages;
454 /** The lists of chunks with free pages; the index is given by (GMMCHUNK::cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT. */
455 PGMMCHUNK apLists[GMM_CHUNK_FREE_SET_LISTS];
456} GMMCHUNKFREESET;
457
458
459/**
460 * The GMM instance data.
461 */
462typedef struct GMM
463{
464 /** Magic / eye catcher. GMM_MAGIC */
465 uint32_t u32Magic;
466 /** The fast mutex protecting the GMM.
467 * More fine grained locking can be implemented later if necessary. */
468 RTSEMFASTMUTEX Mtx;
469 /** The chunk tree. */
470 PAVLU32NODECORE pChunks;
471 /** The chunk TLB. */
472 GMMCHUNKTLB ChunkTLB;
473 /** The private free set. */
474 GMMCHUNKFREESET Private;
475 /** The shared free set. */
476 GMMCHUNKFREESET Shared;
477
478 /** The maximum number of pages we're allowed to allocate.
479 * @gcfgm 64-bit GMM/MaxPages Direct.
480 * @gcfgm 32-bit GMM/PctPages Relative to the number of host pages. */
481 uint64_t cMaxPages;
482 /** The number of pages that have been reserved.
483 * The deal is that cReservedPages - cOverCommittedPages <= cMaxPages. */
484 uint64_t cReservedPages;
485 /** The number of pages that we have over-committed in reservations. */
486 uint64_t cOverCommittedPages;
487 /** The number of actually allocated (committed if you like) pages. */
488 uint64_t cAllocatedPages;
489 /** The number of pages that are shared. A subset of cAllocatedPages. */
490 uint64_t cSharedPages;
491 /** The number of shared pages that have been left behind by
492 * VMs not doing proper cleanups. */
493 uint64_t cLeftBehindSharedPages;
494 /** The number of allocation chunks.
495 * (The number of pages we've allocated from the host can be derived from this.) */
496 uint32_t cChunks;
497 /** The number of current ballooned pages. */
498 uint64_t cBalloonedPages;
499
500 /** The legacy mode indicator.
501 * This is determined at initialization time. */
502 bool fLegacyMode;
503 /** The number of registered VMs. */
504 uint16_t cRegisteredVMs;
505
506 /** The previously allocated Chunk ID.
507 * Used as a hint to avoid scanning the whole bitmap. */
508 uint32_t idChunkPrev;
509 /** Chunk ID allocation bitmap.
510 * Bits of allocated IDs are set, free ones are cleared.
511 * The NIL id (0) is marked allocated. */
512 uint32_t bmChunkId[(GMM_CHUNKID_LAST + 1 + 31) / 32];
513} GMM;
514/** Pointer to the GMM instance. */
515typedef GMM *PGMM;
516
517/** The value of GMM::u32Magic (Katsuhiro Otomo). */
518#define GMM_MAGIC 0x19540414
519
520
521/*******************************************************************************
522* Global Variables *
523*******************************************************************************/
524/** Pointer to the GMM instance data. */
525static PGMM g_pGMM = NULL;
526
527/** Macro for obtaining and validating the g_pGMM pointer.
528 * On failure it will return from the invoking function with the specified return value.
529 *
530 * @param pGMM The name of the pGMM variable.
531 * @param rc The return value on failure. Use VERR_INTERNAL_ERROR for
532 * functions returning VBox status codes.
533 */
534#define GMM_GET_VALID_INSTANCE(pGMM, rc) \
535 do { \
536 (pGMM) = g_pGMM; \
537 AssertPtrReturn((pGMM), (rc)); \
538 AssertMsgReturn((pGMM)->u32Magic == GMM_MAGIC, ("%p - %#x\n", (pGMM), (pGMM)->u32Magic), (rc)); \
539 } while (0)
540
541/** Macro for obtaining and validating the g_pGMM pointer, void function variant.
542 * On failure it will return from the invoking function.
543 *
544 * @param pGMM The name of the pGMM variable.
545 */
546#define GMM_GET_VALID_INSTANCE_VOID(pGMM) \
547 do { \
548 (pGMM) = g_pGMM; \
549 AssertPtrReturnVoid((pGMM)); \
550 AssertMsgReturnVoid((pGMM)->u32Magic == GMM_MAGIC, ("%p - %#x\n", (pGMM), (pGMM)->u32Magic)); \
551 } while (0)
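
/*
 * Typical use of the two macros above at the top of an interface function;
 * this mirrors, e.g., GMMR0InitialReservation and GMMR0CleanupVM below.
 * For a function returning a VBox status code:
 *
 *      PGMM pGMM;
 *      GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
 *
 * And for a void function:
 *
 *      PGMM pGMM;
 *      GMM_GET_VALID_INSTANCE_VOID(pGMM);
 */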
552
553
554/*******************************************************************************
555* Internal Functions *
556*******************************************************************************/
557static DECLCALLBACK(int) gmmR0TermDestroyChunk(PAVLU32NODECORE pNode, void *pvGMM);
558static DECLCALLBACK(int) gmmR0CleanupVMScanChunk(PAVLU32NODECORE pNode, void *pvGMM);
559/*static*/ DECLCALLBACK(int) gmmR0CleanupVMDestroyChunk(PAVLU32NODECORE pNode, void *pvGVM);
560DECLINLINE(void) gmmR0LinkChunk(PGMMCHUNK pChunk, PGMMCHUNKFREESET pSet);
561DECLINLINE(void) gmmR0UnlinkChunk(PGMMCHUNK pChunk);
562static void gmmR0FreeChunk(PGMM pGMM, PGMMCHUNK pChunk);
563static void gmmR0FreeSharedPage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage);
564
565
566
567/**
568 * Initializes the GMM component.
569 *
570 * This is called when the VMMR0.r0 module is loaded and protected by the
571 * loader semaphore.
572 *
573 * @returns VBox status code.
574 */
575GMMR0DECL(int) GMMR0Init(void)
576{
577 LogFlow(("GMMInit:\n"));
578
579 /*
580 * Allocate the instance data and the lock(s).
581 */
582 PGMM pGMM = (PGMM)RTMemAllocZ(sizeof(*pGMM));
583 if (!pGMM)
584 return VERR_NO_MEMORY;
585 pGMM->u32Magic = GMM_MAGIC;
586 for (unsigned i = 0; i < RT_ELEMENTS(pGMM->ChunkTLB.aEntries); i++)
587 pGMM->ChunkTLB.aEntries[i].idChunk = NIL_GMM_CHUNKID;
588 ASMBitSet(&pGMM->bmChunkId[0], NIL_GMM_CHUNKID);
589
590 int rc = RTSemFastMutexCreate(&pGMM->Mtx);
591 if (RT_SUCCESS(rc))
592 {
593 /*
594 * Check and see if RTR0MemObjAllocPhysNC works.
595 */
596 RTR0MEMOBJ MemObj;
597 rc = RTR0MemObjAllocPhysNC(&MemObj, _64K, NIL_RTHCPHYS);
598 if (RT_SUCCESS(rc))
599 {
600 rc = RTR0MemObjFree(MemObj, true);
601 AssertRC(rc);
602 }
603 else if (rc == VERR_NOT_SUPPORTED)
604 pGMM->fLegacyMode = true;
605 else
606 SUPR0Printf("GMMR0Init: RTR0MemObjAllocPhysNC(,64K,Any) -> %d!\n", rc);
607
608 g_pGMM = pGMM;
609 LogFlow(("GMMInit: pGMM=%p fLegacy=%RTbool\n", pGMM, pGMM->fLegacyMode));
610 return VINF_SUCCESS;
611 }
612
613 RTMemFree(pGMM);
614 SUPR0Printf("GMMR0Init: failed! rc=%d\n", rc);
615 return rc;
616}
617
618
619/**
620 * Terminates the GMM component.
621 */
622GMMR0DECL(void) GMMR0Term(void)
623{
624 LogFlow(("GMMTerm:\n"));
625
626 /*
627 * Take care / be paranoid...
628 */
629 PGMM pGMM = g_pGMM;
630 if (!VALID_PTR(pGMM))
631 return;
632 if (pGMM->u32Magic != GMM_MAGIC)
633 {
634 SUPR0Printf("GMMR0Term: u32Magic=%#x\n", pGMM->u32Magic);
635 return;
636 }
637
638 /*
639 * Undo what init did and free any resources we've acquired.
640 */
641 /* Destroy the fundamentals. */
642 g_pGMM = NULL;
643 pGMM->u32Magic++;
644 RTSemFastMutexDestroy(pGMM->Mtx);
645 pGMM->Mtx = NIL_RTSEMFASTMUTEX;
646
647 /* free any chunks still hanging around. */
648 RTAvlU32Destroy(&pGMM->pChunks, gmmR0TermDestroyChunk, pGMM);
649
650 /* finally the instance data itself. */
651 RTMemFree(pGMM);
652 LogFlow(("GMMTerm: done\n"));
653}
654
655
656/**
657 * RTAvlU32Destroy callback.
658 *
659 * @returns 0
660 * @param pNode The node to destroy.
661 * @param pvGMM The GMM handle.
662 */
663static DECLCALLBACK(int) gmmR0TermDestroyChunk(PAVLU32NODECORE pNode, void *pvGMM)
664{
665 PGMMCHUNK pChunk = (PGMMCHUNK)pNode;
666
667 if (pChunk->cFree != (GMM_CHUNK_SIZE >> PAGE_SHIFT))
668 SUPR0Printf("GMMR0Term: %p/%#x: cFree=%d cPrivate=%d cShared=%d cMappings=%d\n", pChunk,
669 pChunk->Core.Key, pChunk->cFree, pChunk->cPrivate, pChunk->cShared, pChunk->cMappings);
670
671 int rc = RTR0MemObjFree(pChunk->MemObj, true /* fFreeMappings */);
672 if (RT_FAILURE(rc))
673 {
674 SUPR0Printf("GMMR0Term: %p/%#x: RTR0MemObjFree(%p,true) -> %d (cMappings=%d)\n", pChunk,
675 pChunk->Core.Key, pChunk->MemObj, rc, pChunk->cMappings);
676 AssertRC(rc);
677 }
678 pChunk->MemObj = NIL_RTR0MEMOBJ;
679
680 RTMemFree(pChunk->paMappings);
681 pChunk->paMappings = NULL;
682
683 RTMemFree(pChunk);
684 NOREF(pvGMM);
685 return 0;
686}
687
688
689/**
690 * Initializes the per-VM data for the GMM.
691 *
692 * This is called from within the GVMM lock (from GVMMR0CreateVM)
693 * and should only initialize the data members so GMMR0CleanupVM
694 * can deal with them. We reserve no memory or anything here;
695 * that's done later in GMMR0InitVM.
696 *
697 * @param pGVM Pointer to the Global VM structure.
698 */
699GMMR0DECL(void) GMMR0InitPerVMData(PGVM pGVM)
700{
701 AssertCompile(RT_SIZEOFMEMB(GVM,gmm.s) <= RT_SIZEOFMEMB(GVM,gmm.padding));
702 AssertRelease(RT_SIZEOFMEMB(GVM,gmm.s) <= RT_SIZEOFMEMB(GVM,gmm.padding));
703
704 pGVM->gmm.s.enmPolicy = GMMOCPOLICY_INVALID;
705 pGVM->gmm.s.enmPriority = GMMPRIORITY_INVALID;
706 pGVM->gmm.s.fMayAllocate = false;
707}
708
709
710/**
711 * Cleans up when a VM is terminating.
712 *
713 * @param pGVM Pointer to the Global VM structure.
714 */
715GMMR0DECL(void) GMMR0CleanupVM(PGVM pGVM)
716{
717 LogFlow(("GMMR0CleanupVM: pGVM=%p:{.pVM=%p, .hSelf=%#x}\n", pGVM, pGVM->pVM, pGVM->hSelf));
718
719 PGMM pGMM;
720 GMM_GET_VALID_INSTANCE_VOID(pGMM);
721
722 int rc = RTSemFastMutexRequest(pGMM->Mtx);
723 AssertRC(rc);
724
725 /*
726 * The policy is 'INVALID' until the initial reservation
727 * request has been serviced.
728 */
729 if ( pGVM->gmm.s.enmPolicy > GMMOCPOLICY_INVALID
730 && pGVM->gmm.s.enmPolicy < GMMOCPOLICY_END)
731 {
732 /*
733 * If it's the last VM around, we can skip walking all the chunks looking
734 * for the pages owned by this VM and instead flush the whole shebang.
735 *
736 * This takes care of the eventuality that a VM has left shared page
737 * references behind (shouldn't happen of course, but you never know).
738 */
739 Assert(pGMM->cRegisteredVMs);
740 pGMM->cRegisteredVMs--;
741#if 0 /* disabled so it won't hide bugs. */
742 if (!pGMM->cRegisteredVMs)
743 {
744 RTAvlU32Destroy(&pGMM->pChunks, gmmR0CleanupVMDestroyChunk, pGMM);
745
746 for (unsigned i = 0; i < RT_ELEMENTS(pGMM->ChunkTLB.aEntries); i++)
747 {
748 pGMM->ChunkTLB.aEntries[i].idChunk = NIL_GMM_CHUNKID;
749 pGMM->ChunkTLB.aEntries[i].pChunk = NULL;
750 }
751
752 memset(&pGMM->Private, 0, sizeof(pGMM->Private));
753 memset(&pGMM->Shared, 0, sizeof(pGMM->Shared));
754
755 memset(&pGMM->bmChunkId[0], 0, sizeof(pGMM->bmChunkId));
756 ASMBitSet(&pGMM->bmChunkId[0], NIL_GMM_CHUNKID);
757
758 pGMM->cReservedPages = 0;
759 pGMM->cOverCommittedPages = 0;
760 pGMM->cAllocatedPages = 0;
761 pGMM->cSharedPages = 0;
762 pGMM->cLeftBehindSharedPages = 0;
763 pGMM->cChunks = 0;
764 pGMM->cBalloonedPages = 0;
765 }
766 else
767#endif
768 {
769 /*
770 * Walk the entire pool looking for pages that belong to this VM
771 * and left-over mappings. (This'll only catch private pages; shared
772 * pages will be 'left behind'.)
773 */
774 uint64_t cPrivatePages = pGVM->gmm.s.cPrivatePages; /* save */
775 RTAvlU32DoWithAll(&pGMM->pChunks, true /* fFromLeft */, gmmR0CleanupVMScanChunk, pGVM);
776 if (pGVM->gmm.s.cPrivatePages)
777 SUPR0Printf("GMMR0CleanupVM: hGVM=%#x has %#x private pages that cannot be found!\n", pGVM->hSelf, pGVM->gmm.s.cPrivatePages);
778 pGMM->cAllocatedPages -= cPrivatePages;
779
780 /* free empty chunks. */
781 if (cPrivatePages)
782 {
783 PGMMCHUNK pCur = pGMM->Private.apLists[RT_ELEMENTS(pGMM->Private.apLists) - 1];
784 while (pCur)
785 {
786 PGMMCHUNK pNext = pCur->pFreeNext;
787 if ( pCur->cFree == GMM_CHUNK_NUM_PAGES
788 && (!pGMM->fLegacyMode || pCur->hGVM == pGVM->hSelf))
789 gmmR0FreeChunk(pGMM, pCur);
790 pCur = pNext;
791 }
792 }
793
794 /* account for shared pages that weren't freed. */
795 if (pGVM->gmm.s.cSharedPages)
796 {
797 Assert(pGMM->cSharedPages >= pGVM->gmm.s.cSharedPages);
798 SUPR0Printf("GMMR0CleanupVM: hGVM=%#x left %#x shared pages behind!\n", pGVM->hSelf, pGVM->gmm.s.cSharedPages);
799 pGMM->cLeftBehindSharedPages += pGVM->gmm.s.cSharedPages;
800 }
801
802 /*
803 * Update the over-commitment management statistics.
804 */
805 pGMM->cReservedPages -= pGVM->gmm.s.Reserved.cBasePages
806 + pGVM->gmm.s.Reserved.cFixedPages
807 + pGVM->gmm.s.Reserved.cShadowPages;
808 switch (pGVM->gmm.s.enmPolicy)
809 {
810 case GMMOCPOLICY_NO_OC:
811 break;
812 default:
813 /** @todo Update GMM->cOverCommittedPages */
814 break;
815 }
816 }
817 }
818
819 /* zap the GVM data. */
820 pGVM->gmm.s.enmPolicy = GMMOCPOLICY_INVALID;
821 pGVM->gmm.s.enmPriority = GMMPRIORITY_INVALID;
822 pGVM->gmm.s.fMayAllocate = false;
823
824 RTSemFastMutexRelease(pGMM->Mtx);
825
826 LogFlow(("GMMR0CleanupVM: returns\n"));
827}
828
829
830/**
831 * RTAvlU32DoWithAll callback.
832 *
833 * @returns 0
834 * @param pNode The node to search.
835 * @param pvGVM Pointer to the global VM structure.
836 */
837static DECLCALLBACK(int) gmmR0CleanupVMScanChunk(PAVLU32NODECORE pNode, void *pvGVM)
838{
839 PGMMCHUNK pChunk = (PGMMCHUNK)pNode;
840 PGVM pGVM = (PGVM)pvGVM;
841
842 /*
843 * Look for pages belonging to the VM.
844 * (Perform some internal checks while we're scanning.)
845 */
846#ifndef VBOX_STRICT
847 if (pChunk->cFree != (GMM_CHUNK_SIZE >> PAGE_SHIFT))
848#endif
849 {
850 unsigned cPrivate = 0;
851 unsigned cShared = 0;
852 unsigned cFree = 0;
853
854 uint16_t hGVM = pGVM->hSelf;
855 unsigned iPage = (GMM_CHUNK_SIZE >> PAGE_SHIFT);
856 while (iPage-- > 0)
857 if (GMM_PAGE_IS_PRIVATE(&pChunk->aPages[iPage]))
858 {
859 if (pChunk->aPages[iPage].Private.hGVM == hGVM)
860 {
861 /*
862 * Free the page.
863 *
864 * The reason for not using gmmR0FreePrivatePage here is that we
865 * must *not* cause the chunk to be freed from under us - we're in
866 * a AVL tree walk here.
867 */
868 pChunk->aPages[iPage].u = 0;
869 pChunk->aPages[iPage].Free.iNext = pChunk->iFreeHead;
870 pChunk->aPages[iPage].Free.u2State = GMM_PAGE_STATE_FREE;
871 pChunk->iFreeHead = iPage;
872 pChunk->cPrivate--;
873 if ((pChunk->cFree & GMM_CHUNK_FREE_SET_MASK) == 0)
874 {
875 gmmR0UnlinkChunk(pChunk);
876 pChunk->cFree++;
877 gmmR0LinkChunk(pChunk, pChunk->cShared ? &g_pGMM->Shared : &g_pGMM->Private);
878 }
879 else
880 pChunk->cFree++;
881 pGVM->gmm.s.cPrivatePages--;
882 cFree++;
883 }
884 else
885 cPrivate++;
886 }
887 else if (GMM_PAGE_IS_FREE(&pChunk->aPages[iPage]))
888 cFree++;
889 else
890 cShared++;
891
892 /*
893 * Did it add up?
894 */
895 if (RT_UNLIKELY( pChunk->cFree != cFree
896 || pChunk->cPrivate != cPrivate
897 || pChunk->cShared != cShared))
898 {
899 SUPR0Printf("gmmR0CleanupVMScanChunk: Chunk %p/%#x has bogus stats - free=%d/%d private=%d/%d shared=%d/%d\n",
900 pChunk, pChunk->Core.Key, pChunk->cFree, cFree, pChunk->cPrivate, cPrivate, pChunk->cShared, cShared);
901 pChunk->cFree = cFree;
902 pChunk->cPrivate = cPrivate;
903 pChunk->cShared = cShared;
904 }
905 }
906
907 /*
908 * Look for the mapping belonging to the terminating VM.
909 */
910 for (unsigned i = 0; i < pChunk->cMappings; i++)
911 if (pChunk->paMappings[i].pGVM == pGVM)
912 {
913 RTR0MEMOBJ MemObj = pChunk->paMappings[i].MapObj;
914
915 pChunk->cMappings--;
916 if (i < pChunk->cMappings)
917 pChunk->paMappings[i] = pChunk->paMappings[pChunk->cMappings];
918 pChunk->paMappings[pChunk->cMappings].pGVM = NULL;
919 pChunk->paMappings[pChunk->cMappings].MapObj = NIL_RTR0MEMOBJ;
920
921 int rc = RTR0MemObjFree(MemObj, false /* fFreeMappings (NA) */);
922 if (RT_FAILURE(rc))
923 {
924 SUPR0Printf("gmmR0CleanupVMScanChunk: %p/%#x: mapping #%x: RTR0MemObjFree(%p,false) -> %d \n",
925 pChunk, pChunk->Core.Key, i, MemObj, rc);
926 AssertRC(rc);
927 }
928 break;
929 }
930
931 /*
932 * If not in legacy mode, we should reset the hGVM field
933 * if it has our handle in it.
934 */
935 if (pChunk->hGVM == pGVM->hSelf)
936 {
937 if (!g_pGMM->fLegacyMode)
938 pChunk->hGVM = NIL_GVM_HANDLE;
939 else if (pChunk->cFree != GMM_CHUNK_NUM_PAGES)
940 {
941 SUPR0Printf("gmmR0CleanupVMScanChunk: %p/%#x: cFree=%#x - it should be %#x in legacy mode!\n",
942 pChunk, pChunk->Core.Key, pChunk->cFree, GMM_CHUNK_NUM_PAGES);
943 AssertMsgFailed(("%p/%#x: cFree=%#x - it should be %#x in legacy mode!\n", pChunk, pChunk->Core.Key, pChunk->cFree, GMM_CHUNK_NUM_PAGES));
944
945 gmmR0UnlinkChunk(pChunk);
946 pChunk->cFree = GMM_CHUNK_NUM_PAGES;
947 gmmR0LinkChunk(pChunk, pChunk->cShared ? &g_pGMM->Shared : &g_pGMM->Private);
948 }
949 }
950
951 return 0;
952}
953
954
955/**
956 * RTAvlU32Destroy callback for GMMR0CleanupVM.
957 *
958 * @returns 0
959 * @param pNode The node (allocation chunk) to destroy.
960 * @param pvGVM Pointer to the global VM structure.
961 */
962/*static*/ DECLCALLBACK(int) gmmR0CleanupVMDestroyChunk(PAVLU32NODECORE pNode, void *pvGVM)
963{
964 PGMMCHUNK pChunk = (PGMMCHUNK)pNode;
965 PGVM pGVM = (PGVM)pvGVM;
966
967 for (unsigned i = 0; i < pChunk->cMappings; i++)
968 {
969 if (pChunk->paMappings[i].pGVM != pGVM)
970 SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: pGVM=%p expected %p\n", pChunk,
971 pChunk->Core.Key, i, pChunk->paMappings[i].pGVM, pGVM);
972 int rc = RTR0MemObjFree(pChunk->paMappings[i].MapObj, false /* fFreeMappings (NA) */);
973 if (RT_FAILURE(rc))
974 {
975 SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: mapping #%x: RTR0MemObjFree(%p,false) -> %d \n", pChunk,
976 pChunk->Core.Key, i, pChunk->paMappings[i].MapObj, rc);
977 AssertRC(rc);
978 }
979 }
980
981 int rc = RTR0MemObjFree(pChunk->MemObj, true /* fFreeMappings */);
982 if (RT_FAILURE(rc))
983 {
984 SUPR0Printf("gmmR0CleanupVMDestroyChunk: %p/%#x: RTR0MemObjFree(%p,true) -> %d (cMappings=%d)\n", pChunk,
985 pChunk->Core.Key, pChunk->MemObj, rc, pChunk->cMappings);
986 AssertRC(rc);
987 }
988 pChunk->MemObj = NIL_RTR0MEMOBJ;
989
990 RTMemFree(pChunk->paMappings);
991 pChunk->paMappings = NULL;
992
993 RTMemFree(pChunk);
994 return 0;
995}
996
997
998/**
999 * The initial resource reservations.
1000 *
1001 * This will make memory reservations according to policy and priority. If there aren't
1002 * sufficient resources available to sustain the VM this function will fail and all
1003 * future allocation requests will fail as well.
1004 *
1005 * These are just the initial reservations made very early during the VM creation
1006 * process and will be adjusted later in the GMMR0UpdateReservation call after the
1007 * ring-3 init has completed.
1008 *
1009 * @returns VBox status code.
1010 * @retval VERR_GMM_NOT_SUFFICENT_MEMORY
1011 * @retval VERR_GMM_
1012 *
1013 * @param pVM Pointer to the shared VM structure.
1014 * @param cBasePages The number of pages that may be allocated for the base RAM and ROMs.
1015 * This does not include MMIO2 and similar.
1016 * @param cShadowPages The number of pages that may be allocated for shadow paging structures.
1017 * @param cFixedPages The number of pages that may be allocated for fixed objects like the
1018 * hyper heap, MMIO2 and similar.
1019 * @param enmPolicy The OC policy to use on this VM.
1020 * @param enmPriority The priority in an out-of-memory situation.
1021 *
1022 * @thread The creator thread / EMT.
1023 */
1024GMMR0DECL(int) GMMR0InitialReservation(PVM pVM, uint64_t cBasePages, uint32_t cShadowPages, uint32_t cFixedPages,
1025 GMMOCPOLICY enmPolicy, GMMPRIORITY enmPriority)
1026{
1027 LogFlow(("GMMR0InitialReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x enmPolicy=%d enmPriority=%d\n",
1028 pVM, cBasePages, cShadowPages, cFixedPages, enmPolicy, enmPriority));
1029
1030 /*
1031 * Validate, get basics and take the semaphore.
1032 */
1033 PGMM pGMM;
1034 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
1035 PGVM pGVM = GVMMR0ByVM(pVM);
1036 if (!pGVM)
1037 return VERR_INVALID_PARAMETER;
1038 if (pGVM->hEMT != RTThreadNativeSelf())
1039 return VERR_NOT_OWNER;
1040
1041 AssertReturn(cBasePages, VERR_INVALID_PARAMETER);
1042 AssertReturn(cShadowPages, VERR_INVALID_PARAMETER);
1043 AssertReturn(cFixedPages, VERR_INVALID_PARAMETER);
1044 AssertReturn(enmPolicy > GMMOCPOLICY_INVALID && enmPolicy < GMMOCPOLICY_END, VERR_INVALID_PARAMETER);
1045 AssertReturn(enmPriority > GMMPRIORITY_INVALID && enmPriority < GMMPRIORITY_END, VERR_INVALID_PARAMETER);
1046
1047 int rc = RTSemFastMutexRequest(pGMM->Mtx);
1048 AssertRC(rc);
1049
1050 if ( !pGVM->gmm.s.Reserved.cBasePages
1051 && !pGVM->gmm.s.Reserved.cFixedPages
1052 && !pGVM->gmm.s.Reserved.cShadowPages)
1053 {
1054 /*
1055 * Check if we can accommodate this.
1056 */
1057 /* ... later ... */
1058 if (RT_SUCCESS(rc))
1059 {
1060 /*
1061 * Update the records.
1062 */
1063 pGVM->gmm.s.Reserved.cBasePages = cBasePages;
1064 pGVM->gmm.s.Reserved.cFixedPages = cFixedPages;
1065 pGVM->gmm.s.Reserved.cShadowPages = cShadowPages;
1066 pGVM->gmm.s.enmPolicy = enmPolicy;
1067 pGVM->gmm.s.enmPriority = enmPriority;
1068 pGVM->gmm.s.fMayAllocate = true;
1069
1070 pGMM->cReservedPages += cBasePages + cFixedPages + cShadowPages;
1071 pGMM->cRegisteredVMs++;
1072 }
1073 }
1074 else
1075 rc = VERR_WRONG_ORDER;
1076
1077 RTSemFastMutexRelease(pGMM->Mtx);
1078 LogFlow(("GMMR0InitialReservation: returns %Rrc\n", rc));
1079 return rc;
1080}
1081
1082
1083/**
1084 * VMMR0 request wrapper for GMMR0InitialReservation.
1085 *
1086 * @returns see GMMR0InitialReservation.
1087 * @param pVM Pointer to the shared VM structure.
1088 * @param pReq The request packet.
1089 */
1090GMMR0DECL(int) GMMR0InitialReservationReq(PVM pVM, PGMMINITIALRESERVATIONREQ pReq)
1091{
1092 /*
1093 * Validate input and pass it on.
1094 */
1095 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
1096 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
1097 AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);
1098
1099 return GMMR0InitialReservation(pVM, pReq->cBasePages, pReq->cShadowPages, pReq->cFixedPages, pReq->enmPolicy, pReq->enmPriority);
1100}
1101
1102
1103/**
1104 * This updates the memory reservation with the additional MMIO2 and ROM pages.
1105 *
1106 * @returns VBox status code.
1107 * @retval VERR_GMM_NOT_SUFFICENT_MEMORY
1108 *
1109 * @param pVM Pointer to the shared VM structure.
1110 * @param cBasePages The number of pages that may be allocated for the base RAM and ROMs.
1111 * This does not include MMIO2 and similar.
1112 * @param cShadowPages The number of pages that may be allocated for shadow paging structures.
1113 * @param cFixedPages The number of pages that may be allocated for fixed objects like the
1114 * hyper heap, MMIO2 and similar.
1117 *
1118 * @thread EMT.
1119 */
1120GMMR0DECL(int) GMMR0UpdateReservation(PVM pVM, uint64_t cBasePages, uint32_t cShadowPages, uint32_t cFixedPages)
1121{
1122 LogFlow(("GMMR0UpdateReservation: pVM=%p cBasePages=%#llx cShadowPages=%#x cFixedPages=%#x\n",
1123 pVM, cBasePages, cShadowPages, cFixedPages));
1124
1125 /*
1126 * Validate, get basics and take the semaphore.
1127 */
1128 PGMM pGMM;
1129 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
1130 PGVM pGVM = GVMMR0ByVM(pVM);
1131 if (!pGVM)
1132 return VERR_INVALID_PARAMETER;
1133 if (pGVM->hEMT != RTThreadNativeSelf())
1134 return VERR_NOT_OWNER;
1135
1136 AssertReturn(cBasePages, VERR_INVALID_PARAMETER);
1137 AssertReturn(cShadowPages, VERR_INVALID_PARAMETER);
1138 AssertReturn(cFixedPages, VERR_INVALID_PARAMETER);
1139
1140 int rc = RTSemFastMutexRequest(pGMM->Mtx);
1141 AssertRC(rc);
1142
1143 if ( pGVM->gmm.s.Reserved.cBasePages
1144 && pGVM->gmm.s.Reserved.cFixedPages
1145 && pGVM->gmm.s.Reserved.cShadowPages)
1146 {
1147 /*
1148 * Check if we can accommodate this.
1149 */
1150 /* ... later ... */
1151 if (RT_SUCCESS(rc))
1152 {
1153 /*
1154 * Update the records.
1155 */
1156 pGMM->cReservedPages -= pGVM->gmm.s.Reserved.cBasePages
1157 + pGVM->gmm.s.Reserved.cFixedPages
1158 + pGVM->gmm.s.Reserved.cShadowPages;
1159 pGMM->cReservedPages += cBasePages + cFixedPages + cShadowPages;
1160
1161 pGVM->gmm.s.Reserved.cBasePages = cBasePages;
1162 pGVM->gmm.s.Reserved.cFixedPages = cFixedPages;
1163 pGVM->gmm.s.Reserved.cShadowPages = cShadowPages;
1164 }
1165 }
1166 else
1167 rc = VERR_WRONG_ORDER;
1168
1169 RTSemFastMutexRelease(pGMM->Mtx);
1170 LogFlow(("GMMR0UpdateReservation: returns %Rrc\n", rc));
1171 return rc;
1172}
1173
1174
1175/**
1176 * VMMR0 request wrapper for GMMR0UpdateReservation.
1177 *
1178 * @returns see GMMR0UpdateReservation.
1179 * @param pVM Pointer to the shared VM structure.
1180 * @param pReq The request packet.
1181 */
1182GMMR0DECL(int) GMMR0UpdateReservationReq(PVM pVM, PGMMUPDATERESERVATIONREQ pReq)
1183{
1184 /*
1185 * Validate input and pass it on.
1186 */
1187 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
1188 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
1189 AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);
1190
1191 return GMMR0UpdateReservation(pVM, pReq->cBasePages, pReq->cShadowPages, pReq->cFixedPages);
1192}
1193
1194
1195/**
1196 * Looks up a chunk in the tree and fills in the TLB entry for it.
1197 *
1198 * This is not expected to fail and will bitch if it does.
1199 *
1200 * @returns Pointer to the allocation chunk, NULL if not found.
1201 * @param pGMM Pointer to the GMM instance.
1202 * @param idChunk The ID of the chunk to find.
1203 * @param pTlbe Pointer to the TLB entry.
1204 */
1205static PGMMCHUNK gmmR0GetChunkSlow(PGMM pGMM, uint32_t idChunk, PGMMCHUNKTLBE pTlbe)
1206{
1207 PGMMCHUNK pChunk = (PGMMCHUNK)RTAvlU32Get(&pGMM->pChunks, idChunk);
1208 AssertMsgReturn(pChunk, ("Chunk %#x not found!\n", idChunk), NULL);
1209 pTlbe->idChunk = idChunk;
1210 pTlbe->pChunk = pChunk;
1211 return pChunk;
1212}
1213
1214
1215/**
1216 * Finds an allocation chunk.
1217 *
1218 * This is not expected to fail and will bitch if it does.
1219 *
1220 * @returns Pointer to the allocation chunk, NULL if not found.
1221 * @param pGMM Pointer to the GMM instance.
1222 * @param idChunk The ID of the chunk to find.
1223 */
1224DECLINLINE(PGMMCHUNK) gmmR0GetChunk(PGMM pGMM, uint32_t idChunk)
1225{
1226 /*
1227 * Do a TLB lookup, branch if not in the TLB.
1228 */
1229 PGMMCHUNKTLBE pTlbe = &pGMM->ChunkTLB.aEntries[GMM_CHUNKTLB_IDX(idChunk)];
1230 if ( pTlbe->idChunk != idChunk
1231 || !pTlbe->pChunk)
1232 return gmmR0GetChunkSlow(pGMM, idChunk, pTlbe);
1233 return pTlbe->pChunk;
1234}
1235
1236
1237/**
1238 * Finds a page.
1239 *
1240 * This is not expected to fail and will bitch if it does.
1241 *
1242 * @returns Pointer to the page, NULL if not found.
1243 * @param pGMM Pointer to the GMM instance.
1244 * @param idPage The ID of the page to find.
1245 */
1246DECLINLINE(PGMMPAGE) gmmR0GetPage(PGMM pGMM, uint32_t idPage)
1247{
1248 PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
1249 if (RT_LIKELY(pChunk))
1250 return &pChunk->aPages[idPage & GMM_PAGEID_IDX_MASK];
1251 return NULL;
1252}
1253
1254
1255/**
1256 * Unlinks the chunk from the free list it's currently on (if any).
1257 *
1258 * @param pChunk The allocation chunk.
1259 */
1260DECLINLINE(void) gmmR0UnlinkChunk(PGMMCHUNK pChunk)
1261{
1262 PGMMCHUNKFREESET pSet = pChunk->pSet;
1263 if (RT_LIKELY(pSet))
1264 {
1265 pSet->cPages -= pChunk->cFree;
1266
1267 PGMMCHUNK pPrev = pChunk->pFreePrev;
1268 PGMMCHUNK pNext = pChunk->pFreeNext;
1269 if (pPrev)
1270 pPrev->pFreeNext = pNext;
1271 else
1272 pSet->apLists[(pChunk->cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT] = pNext;
1273 if (pNext)
1274 pNext->pFreePrev = pPrev;
1275
1276 pChunk->pSet = NULL;
1277 pChunk->pFreeNext = NULL;
1278 pChunk->pFreePrev = NULL;
1279 }
1280 else
1281 {
1282 Assert(!pChunk->pFreeNext);
1283 Assert(!pChunk->pFreePrev);
1284 Assert(!pChunk->cFree);
1285 }
1286}
1287
1288
1289/**
1290 * Links the chunk onto the appropriate free list in the specified free set.
1291 *
1292 * If no free entries, it's not linked into any list.
1293 *
1294 * @param pChunk The allocation chunk.
1295 * @param pSet The free set.
1296 */
1297DECLINLINE(void) gmmR0LinkChunk(PGMMCHUNK pChunk, PGMMCHUNKFREESET pSet)
1298{
1299 Assert(!pChunk->pSet);
1300 Assert(!pChunk->pFreeNext);
1301 Assert(!pChunk->pFreePrev);
1302
1303 if (pChunk->cFree > 0)
1304 {
1305 pChunk->pFreePrev = NULL;
1306 unsigned iList = (pChunk->cFree - 1) >> GMM_CHUNK_FREE_SET_SHIFT;
1307 pChunk->pFreeNext = pSet->apLists[iList];
1308 pSet->apLists[iList] = pChunk;
1309
1310 pSet->cPages += pChunk->cFree;
1311 }
1312}
1313
1314
1315/**
1316 * Frees a Chunk ID.
1317 *
1318 * @param pGMM Pointer to the GMM instance.
1319 * @param idChunk The Chunk ID to free.
1320 */
1321static void gmmR0FreeChunkId(PGMM pGMM, uint32_t idChunk)
1322{
1323 Assert(idChunk != NIL_GMM_CHUNKID);
1324 Assert(ASMBitTest(&pGMM->bmChunkId[0], idChunk));
1325 ASMAtomicBitClear(&pGMM->bmChunkId[0], idChunk);
1326}
1327
1328
1329/**
1330 * Allocates a new Chunk ID.
1331 *
1332 * @returns The Chunk ID.
1333 * @param pGMM Pointer to the GMM instance.
1334 */
1335static uint32_t gmmR0AllocateChunkId(PGMM pGMM)
1336{
1337 AssertCompile(!((GMM_CHUNKID_LAST + 1) & 31)); /* must be a multiple of 32 */
1338 AssertCompile(NIL_GMM_CHUNKID == 0);
1339
1340 /*
1341 * Try the next sequential one.
1342 */
1343 int32_t idChunk = ++pGMM->idChunkPrev;
1344#if 0 /* test the fallback first */
1345 if ( idChunk <= GMM_CHUNKID_LAST
1346 && idChunk > NIL_GMM_CHUNKID
1347 && !ASMAtomicBitTestAndSet(&pGMM->bmChunkId[0], idChunk))
1348 return idChunk;
1349#endif
1350
1351 /*
1352 * Scan sequentially from the last one.
1353 */
1354 if ( (uint32_t)idChunk < GMM_CHUNKID_LAST
1355 && idChunk > NIL_GMM_CHUNKID)
1356 {
1357 idChunk = ASMBitNextClear(&pGMM->bmChunkId[0], GMM_CHUNKID_LAST + 1, idChunk);
1358 if (idChunk > NIL_GMM_CHUNKID)
1359 return pGMM->idChunkPrev = idChunk;
1360 }
1361
1362 /*
1363 * Ok, scan from the start.
1364 * We're not racing anyone, so there is no need to expect failures or have restart loops.
1365 */
1366 idChunk = ASMBitFirstClear(&pGMM->bmChunkId[0], GMM_CHUNKID_LAST + 1);
1367 AssertMsgReturn(idChunk > NIL_GMM_CHUNKID, ("%d\n", idChunk), NIL_GMM_CHUNKID);
1368 AssertMsgReturn(!ASMAtomicBitTestAndSet(&pGMM->bmChunkId[0], idChunk), ("%d\n", idChunk), NIL_GMM_CHUNKID);
1369
1370 return pGMM->idChunkPrev = idChunk;
1371}
1372
1373
1374/**
1375 * Registers a new chunk of memory.
1376 *
1377 * This is called by both gmmR0AllocateOneChunk and GMMR0SeedChunk.
1378 *
1379 * @returns VBox status code.
1380 * @param pGMM Pointer to the GMM instance.
1381 * @param pSet Pointer to the set.
1382 * @param MemObj The memory object for the chunk.
1383 * @param hGVM The hGVM value. (Only used by GMMR0SeedChunk.)
1384 */
1385static int gmmR0RegisterChunk(PGMM pGMM, PGMMCHUNKFREESET pSet, RTR0MEMOBJ MemObj, uint16_t hGVM)
1386{
1387 int rc;
1388 PGMMCHUNK pChunk = (PGMMCHUNK)RTMemAllocZ(sizeof(*pChunk));
1389 if (pChunk)
1390 {
1391 /*
1392 * Initialize it.
1393 */
1394 pChunk->MemObj = MemObj;
1395 pChunk->cFree = GMM_CHUNK_NUM_PAGES;
1396 pChunk->hGVM = hGVM;
1397 pChunk->iFreeHead = 0;
1398 for (unsigned iPage = 0; iPage < RT_ELEMENTS(pChunk->aPages) - 1; iPage++)
1399 {
1400 pChunk->aPages[iPage].Free.u2State = GMM_PAGE_STATE_FREE;
1401 pChunk->aPages[iPage].Free.iNext = iPage + 1;
1402 }
1403 pChunk->aPages[RT_ELEMENTS(pChunk->aPages) - 1].Free.u2State = GMM_PAGE_STATE_FREE;
1404 pChunk->aPages[RT_ELEMENTS(pChunk->aPages) - 1].Free.iNext = UINT32_MAX;
1405
1406 /*
1407 * Allocate a Chunk ID and insert it into the tree.
1408 * It doesn't cost anything to be careful here.
1409 */
1410 pChunk->Core.Key = gmmR0AllocateChunkId(pGMM);
1411 if ( pChunk->Core.Key != NIL_GMM_CHUNKID
1412 && pChunk->Core.Key <= GMM_CHUNKID_LAST
1413 && RTAvlU32Insert(&pGMM->pChunks, &pChunk->Core))
1414 {
1415 pGMM->cChunks++;
1416 gmmR0LinkChunk(pChunk, pSet);
1417 return VINF_SUCCESS;
1418 }
1419
1420 rc = VERR_INTERNAL_ERROR;
1421 RTMemFree(pChunk);
1422 }
1423 else
1424 rc = VERR_NO_MEMORY;
1425 return rc;
1426}
1427
1428
1429/**
1430 * Allocate one new chunk and add it to the specified free set.
1431 *
1432 * @returns VBox status code.
1433 * @param pGMM Pointer to the GMM instance.
1434 * @param pSet Pointer to the set.
1435 */
1436static int gmmR0AllocateOneChunk(PGMM pGMM, PGMMCHUNKFREESET pSet)
1437{
1438 /*
1439 * Allocate the memory.
1440 */
1441 RTR0MEMOBJ MemObj;
1442 int rc = RTR0MemObjAllocPhysNC(&MemObj, GMM_CHUNK_SIZE, NIL_RTHCPHYS);
1443 if (RT_SUCCESS(rc))
1444 {
1445 rc = gmmR0RegisterChunk(pGMM, pSet, MemObj, NIL_GVM_HANDLE);
1446 if (RT_FAILURE(rc))
1447 RTR0MemObjFree(MemObj, false /* fFreeMappings */);
1448 }
1449 return rc;
1450}
1451
1452
1453/**
1454 * Attempts to allocate more pages until the requested amount is met.
1455 *
1456 * @returns VBox status code.
1457 * @param pGMM Pointer to the GMM instance data.
1458 * @param pSet Pointer to the free set to grow.
1459 * @param cPages The number of pages needed.
1460 */
1461static int gmmR0AllocateMoreChunks(PGMM pGMM, PGMMCHUNKFREESET pSet, uint32_t cPages)
1462{
1463 Assert(!pGMM->fLegacyMode);
1464
1465 /*
1466 * Try to steal free chunks from the other set first. (Only take 100% free chunks.)
1467 */
1468 PGMMCHUNKFREESET pOtherSet = pSet == &pGMM->Private ? &pGMM->Shared : &pGMM->Private;
1469 while ( pSet->cPages < cPages
1470 && pOtherSet->cPages >= GMM_CHUNK_NUM_PAGES)
1471 {
1472 PGMMCHUNK pChunk = pOtherSet->apLists[GMM_CHUNK_FREE_SET_LISTS - 1];
1473 while (pChunk && pChunk->cFree != GMM_CHUNK_NUM_PAGES)
1474 pChunk = pChunk->pFreeNext;
1475 if (!pChunk)
1476 break;
1477
1478 gmmR0UnlinkChunk(pChunk);
1479 gmmR0LinkChunk(pChunk, pSet);
1480 }
1481
1482 /*
1483 * If we still need more pages, allocate new chunks.
1484 */
1485 while (pSet->cPages < cPages)
1486 {
1487 int rc = gmmR0AllocateOneChunk(pGMM, pSet);
1488 if (RT_FAILURE(rc))
1489 return rc;
1490 }
1491
1492 return VINF_SUCCESS;
1493}
1494
1495
1496/**
1497 * Allocates one page.
1498 *
1499 * Worker for gmmR0AllocatePages.
1500 *
1501 * @param pGMM Pointer to the GMM instance data.
1502 * @param hGVM The GVM handle of the VM requesting memory.
1503 * @param pChunk The chunk to allocate it from.
1504 * @param pPageDesc The page descriptor.
1505 */
1506static void gmmR0AllocatePage(PGMM pGMM, uint32_t hGVM, PGMMCHUNK pChunk, PGMMPAGEDESC pPageDesc)
1507{
1508 /* update the chunk stats. */
1509 if (pChunk->hGVM == NIL_GVM_HANDLE)
1510 pChunk->hGVM = hGVM;
1511 Assert(pChunk->cFree);
1512 pChunk->cFree--;
1513
1514 /* unlink the first free page. */
1515 const uint32_t iPage = pChunk->iFreeHead;
1516 AssertReleaseMsg(iPage < RT_ELEMENTS(pChunk->aPages), ("%d\n", iPage));
1517 PGMMPAGE pPage = &pChunk->aPages[iPage];
1518 Assert(GMM_PAGE_IS_FREE(pPage));
1519 pChunk->iFreeHead = pPage->Free.iNext;
1520
1521 /* make the page private. */
1522 pPage->u = 0;
1523 AssertCompile(GMM_PAGE_STATE_PRIVATE == 0);
1524 pPage->Private.hGVM = hGVM;
1525 AssertCompile(NIL_RTHCPHYS >= GMM_GCPHYS_END);
1526 AssertCompile(GMM_GCPHYS_UNSHAREABLE >= GMM_GCPHYS_END);
1527 if (pPageDesc->HCPhysGCPhys < GMM_GCPHYS_END)
1528 pPage->Private.pfn = pPageDesc->HCPhysGCPhys >> PAGE_SHIFT;
1529 else
1530 pPage->Private.pfn = GMM_PAGE_PFN_UNSHAREABLE; /* unshareable / unassigned - same thing. */
1531
1532 /* update the page descriptor. */
1533 pPageDesc->HCPhysGCPhys = RTR0MemObjGetPagePhysAddr(pChunk->MemObj, iPage);
1534 Assert(pPageDesc->HCPhysGCPhys != NIL_RTHCPHYS);
1535 pPageDesc->idPage = (pChunk->Core.Key << GMM_CHUNKID_SHIFT) | iPage;
1536 pPageDesc->idSharedPage = NIL_GMM_PAGEID;
1537}
1538
1539
1540/**
1541 * Common worker for GMMR0AllocateHandyPages and GMMR0AllocatePages.
1542 *
1543 * @returns VBox status code:
1544 * @retval xxx
1545 *
1546 * @param pGMM Pointer to the GMM instance data.
1547 * @param pGVM Pointer to the shared VM structure.
1548 * @param cPages The number of pages to allocate.
1549 * @param paPages Pointer to the page descriptors.
1550 * See GMMPAGEDESC for details on what is expected on input.
1551 * @param enmAccount The account to charge.
1552 */
1553static int gmmR0AllocatePages(PGMM pGMM, PGVM pGVM, uint32_t cPages, PGMMPAGEDESC paPages, GMMACCOUNT enmAccount)
1554{
1555 /*
1556 * Check allocation limits.
1557 */
1558 if (RT_UNLIKELY(pGMM->cAllocatedPages + cPages > pGMM->cMaxPages))
1559 return VERR_GMM_HIT_GLOBAL_LIMIT;
1560
1561 switch (enmAccount)
1562 {
1563 case GMMACCOUNT_BASE:
1564 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cBasePages + cPages > pGVM->gmm.s.Reserved.cBasePages))
1565 {
1566 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1567 pGVM->gmm.s.Reserved.cBasePages, pGVM->gmm.s.Allocated.cBasePages, cPages));
1568 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1569 }
1570 break;
1571 case GMMACCOUNT_SHADOW:
1572 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cShadowPages + cPages > pGVM->gmm.s.Reserved.cShadowPages))
1573 {
1574 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1575 pGVM->gmm.s.Reserved.cShadowPages, pGVM->gmm.s.Allocated.cShadowPages, cPages));
1576 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1577 }
1578 break;
1579 case GMMACCOUNT_FIXED:
1580 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cFixedPages + cPages > pGVM->gmm.s.Reserved.cFixedPages))
1581 {
1582 Log(("gmmR0AllocatePages: Reserved=%#llx Allocated+Requested=%#llx+%#x!\n",
1583 pGVM->gmm.s.Reserved.cFixedPages, pGVM->gmm.s.Allocated.cFixedPages, cPages));
1584 return VERR_GMM_HIT_VM_ACCOUNT_LIMIT;
1585 }
1586 break;
1587 default:
1588 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
1589 }
1590
1591 /*
1592 * Check if we need to allocate more memory or not. In legacy mode this is
1593 * a bit of extra work, but it's easier to do it upfront than bailing out later.
1594 */
1595 PGMMCHUNKFREESET pSet = &pGMM->Private;
1596 if (pSet->cPages < cPages)
1597 {
1598 if (pGMM->fLegacyMode)
1599 return VERR_GMM_SEED_ME;
1600
1601 int rc = gmmR0AllocateMoreChunks(pGMM, pSet, cPages);
1602 if (RT_FAILURE(rc))
1603 return rc;
1604 Assert(pSet->cPages >= cPages);
1605 }
1606 else if (pGMM->fLegacyMode)
1607 {
1608 uint16_t hGVM = pGVM->hSelf;
1609 uint32_t cPagesFound = 0;
1610 for (unsigned i = 0; i < RT_ELEMENTS(pSet->apLists); i++)
1611 for (PGMMCHUNK pCur = pSet->apLists[i]; pCur; pCur = pCur->pFreeNext)
1612 if (pCur->hGVM == hGVM)
1613 {
1614 cPagesFound += pCur->cFree;
1615 if (cPagesFound >= cPages)
1616 break;
1617 }
1618 if (cPagesFound < cPages)
1619 return VERR_GMM_SEED_ME;
1620 }
1621
1622 /*
1623 * Pick the pages.
1624 */
1625 uint16_t hGVM = pGVM->hSelf;
1626 uint32_t iPage = 0;
1627 for (unsigned i = 0; i < RT_ELEMENTS(pSet->apLists) && iPage < cPages; i++)
1628 {
1629 /* first round, pick from chunks with an affinity to the VM. */
1630 PGMMCHUNK pCur = pSet->apLists[i];
1631 while (pCur && iPage < cPages)
1632 {
1633 PGMMCHUNK pNext = pCur->pFreeNext;
1634
1635 if ( pCur->hGVM == hGVM
1636 && ( pCur->cFree < GMM_CHUNK_NUM_PAGES
1637 || pGMM->fLegacyMode))
1638 {
1639 gmmR0UnlinkChunk(pCur);
1640 for (; pCur->cFree && iPage < cPages; iPage++)
1641 gmmR0AllocatePage(pGMM, hGVM, pCur, &paPages[iPage]);
1642 gmmR0LinkChunk(pCur, pSet);
1643 }
1644
1645 pCur = pNext;
1646 }
1647
1648 /* second round, take all free pages in this list. */
1649 if (!pGMM->fLegacyMode)
1650 {
1651 PGMMCHUNK pCur = pSet->apLists[i];
1652 while (pCur && iPage < cPages)
1653 {
1654 PGMMCHUNK pNext = pCur->pFreeNext;
1655
1656 gmmR0UnlinkChunk(pCur);
1657 for (; pCur->cFree && iPage < cPages; iPage++)
1658 gmmR0AllocatePage(pGMM, hGVM, pCur, &paPages[iPage]);
1659 gmmR0LinkChunk(pCur, pSet);
1660
1661 pCur = pNext;
1662 }
1663 }
1664 }
1665
1666 /*
1667 * Update the account.
1668 */
1669 switch (enmAccount)
1670 {
1671 case GMMACCOUNT_BASE: pGVM->gmm.s.Allocated.cBasePages += iPage; break;
1672 case GMMACCOUNT_SHADOW: pGVM->gmm.s.Allocated.cShadowPages += iPage; break;
1673 case GMMACCOUNT_FIXED: pGVM->gmm.s.Allocated.cFixedPages += iPage; break;
1674 default:
1675 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
1676 }
1677 pGVM->gmm.s.cPrivatePages += iPage;
1678 pGMM->cAllocatedPages += iPage;
1679
1680 AssertMsgReturn(iPage == cPages, ("%d != %d\n", iPage, cPages), VERR_INTERNAL_ERROR);
1681
1682 /*
1683 * Check if we've reached some threshold and should kick one or two VMs and tell
1684 * them to inflate their balloons a bit more... later.
1685 */
1686
1687 return VINF_SUCCESS;
1688}
1689
1690
1691/**
1692 * Updates the previous allocations and allocates more pages.
1693 *
1694 * The handy pages are always taken from the 'base' memory account.
1695 *
1696 * @returns VBox status code:
1697 * @retval xxx
1698 *
1699 * @param pVM Pointer to the shared VM structure.
1700 * @param cPagesToUpdate The number of pages to update (starting from the head).
1701 * @param cPagesToAlloc The number of pages to allocate (starting from the head).
1702 * @param paPages The array of page descriptors.
1703 * See GMMPAGEDESC for details on what is expected on input.
1704 * @thread EMT.
1705 */
1706GMMR0DECL(int) GMMR0AllocateHandyPages(PVM pVM, uint32_t cPagesToUpdate, uint32_t cPagesToAlloc, PGMMPAGEDESC paPages)
1707{
1708 LogFlow(("GMMR0AllocateHandyPages: pVM=%p cPagesToUpdate=%#x cPagesToAlloc=%#x paPages=%p\n",
1709 pVM, cPagesToUpdate, cPagesToAlloc, paPages));
1710
1711 /*
1712 * Validate, get basics and take the semaphore.
1713 * (This is a relatively busy path, so make predictions where possible.)
1714 */
1715 PGMM pGMM;
1716 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
1717 PGVM pGVM = GVMMR0ByVM(pVM);
1718 if (RT_UNLIKELY(!pGVM))
1719 return VERR_INVALID_PARAMETER;
1720 if (RT_UNLIKELY(pGVM->hEMT != RTThreadNativeSelf()))
1721 return VERR_NOT_OWNER;
1722
1723 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
1724 AssertMsgReturn( (cPagesToUpdate && cPagesToUpdate < 1024)
1725 || (cPagesToAlloc && cPagesToAlloc < 1024),
1726 ("cPagesToUpdate=%#x cPagesToAlloc=%#x\n", cPagesToUpdate, cPagesToAlloc),
1727 VERR_INVALID_PARAMETER);
1728
1729 unsigned iPage = 0;
1730 for (; iPage < cPagesToUpdate; iPage++)
1731 {
1732 AssertMsgReturn( ( paPages[iPage].HCPhysGCPhys < GMM_GCPHYS_END
1733 && !(paPages[iPage].HCPhysGCPhys & PAGE_OFFSET_MASK))
1734 || paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS
1735 || paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE,
1736 ("#%#x: %RHp\n", iPage, paPages[iPage].HCPhysGCPhys),
1737 VERR_INVALID_PARAMETER);
1738 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
1739 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
1740 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
1741 AssertMsgReturn( paPages[iPage].idSharedPage <= GMM_PAGEID_LAST
1742 /*|| paPages[iPage].idSharedPage == NIL_GMM_PAGEID*/,
1743 ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
1744 }
1745
1746 for (; iPage < cPagesToAlloc; iPage++)
1747 {
1748 AssertMsgReturn(paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS, ("#%#x: %RHp\n", iPage, paPages[iPage].HCPhysGCPhys), VERR_INVALID_PARAMETER);
1749 AssertMsgReturn(paPages[iPage].idPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
1750 AssertMsgReturn(paPages[iPage].idSharedPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
1751 }
1752
1753 int rc = RTSemFastMutexRequest(pGMM->Mtx);
1754 AssertRC(rc);
1755
1756 /* No allocations before the initial reservation has been made! */
1757 if (RT_LIKELY( pGVM->gmm.s.Reserved.cBasePages
1758 && pGVM->gmm.s.Reserved.cFixedPages
1759 && pGVM->gmm.s.Reserved.cShadowPages))
1760 {
1761 /*
1762 * Perform the updates.
1763 * Stop on the first error.
1764 */
1765 for (iPage = 0; iPage < cPagesToUpdate; iPage++)
1766 {
1767 if (paPages[iPage].idPage != NIL_GMM_PAGEID)
1768 {
1769 PGMMPAGE pPage = gmmR0GetPage(pGMM, paPages[iPage].idPage);
1770 if (RT_LIKELY(pPage))
1771 {
1772 if (RT_LIKELY(GMM_PAGE_IS_PRIVATE(pPage)))
1773 {
1774 if (RT_LIKELY(pPage->Private.hGVM == pGVM->hSelf))
1775 {
1776 AssertCompile(NIL_RTHCPHYS > GMM_GCPHYS_END && GMM_GCPHYS_UNSHAREABLE > GMM_GCPHYS_END);
1777 if (RT_LIKELY(paPages[iPage].HCPhysGCPhys < GMM_GCPHYS_END))
1778 pPage->Private.pfn = paPages[iPage].HCPhysGCPhys >> PAGE_SHIFT;
1779 else if (paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE)
1780 pPage->Private.pfn = GMM_PAGE_PFN_UNSHAREABLE;
1781 /* else: NIL_RTHCPHYS nothing */
1782
1783 paPages[iPage].idPage = NIL_GMM_PAGEID;
1784 paPages[iPage].HCPhysGCPhys = NIL_RTHCPHYS;
1785 }
1786 else
1787 {
1788 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not owner! hGVM=%#x hSelf=%#x\n",
1789 iPage, paPages[iPage].idPage, pPage->Private.hGVM, pGVM->hSelf));
1790 rc = VERR_GMM_NOT_PAGE_OWNER;
1791 break;
1792 }
1793 }
1794 else
1795 {
1796 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not private!\n", iPage, paPages[iPage].idPage));
1797 rc = VERR_GMM_PAGE_NOT_PRIVATE;
1798 break;
1799 }
1800 }
1801 else
1802 {
1803 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not found! (private)\n", iPage, paPages[iPage].idPage));
1804 rc = VERR_GMM_PAGE_NOT_FOUND;
1805 break;
1806 }
1807 }
1808
1809 if (paPages[iPage].idSharedPage != NIL_GMM_PAGEID)
1810 {
1811 PGMMPAGE pPage = gmmR0GetPage(pGMM, paPages[iPage].idSharedPage);
1812 if (RT_LIKELY(pPage))
1813 {
1814 if (RT_LIKELY(GMM_PAGE_IS_SHARED(pPage)))
1815 {
1816 AssertCompile(NIL_RTHCPHYS > GMM_GCPHYS_END && GMM_GCPHYS_UNSHAREABLE > GMM_GCPHYS_END);
1817 Assert(pPage->Shared.cRefs);
1818 Assert(pGVM->gmm.s.cSharedPages);
1819 Assert(pGVM->gmm.s.Allocated.cBasePages);
1820
1821 pGVM->gmm.s.cSharedPages--;
1822 pGVM->gmm.s.Allocated.cBasePages--;
1823 if (!--pPage->Shared.cRefs)
1824 gmmR0FreeSharedPage(pGMM, paPages[iPage].idSharedPage, pPage);
1825
1826 paPages[iPage].idSharedPage = NIL_GMM_PAGEID;
1827 }
1828 else
1829 {
1830 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not shared!\n", iPage, paPages[iPage].idSharedPage));
1831 rc = VERR_GMM_PAGE_NOT_SHARED;
1832 break;
1833 }
1834 }
1835 else
1836 {
1837 Log(("GMMR0AllocateHandyPages: #%#x/%#x: Not found! (shared)\n", iPage, paPages[iPage].idSharedPage));
1838 rc = VERR_GMM_PAGE_NOT_FOUND;
1839 break;
1840 }
1841 }
1842 }
1843
1844 /*
1845 * Join paths with GMMR0AllocatePages for the allocation.
1846 */
1847 if (RT_SUCCESS(rc))
1848 rc = gmmR0AllocatePages(pGMM, pGVM, cPagesToAlloc, paPages, GMMACCOUNT_BASE);
1849 }
1850 else
1851 rc = VERR_WRONG_ORDER;
1852
1853 RTSemFastMutexRelease(pGMM->Mtx);
1854 LogFlow(("GMMR0AllocateHandyPages: returns %Rrc\n", rc));
1855 return rc;
1856}
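
/*
 * A minimal sketch of a possible caller of GMMR0AllocateHandyPages; the
 * batch size (32) and the allocate-only usage are assumptions made for
 * illustration, the real caller is PGM's handy page machinery and may mix
 * updates and allocations.
 *
 * @code
 * GMMPAGEDESC aPages[32];
 * for (unsigned i = 0; i < RT_ELEMENTS(aPages); i++)
 * {
 *     aPages[i].HCPhysGCPhys = NIL_RTHCPHYS;   // filled in by the GMM on success
 *     aPages[i].idPage       = NIL_GMM_PAGEID; // ditto
 *     aPages[i].idSharedPage = NIL_GMM_PAGEID; // no shared page to replace
 * }
 * int rc = GMMR0AllocateHandyPages(pVM, 0 /+ cPagesToUpdate +/, RT_ELEMENTS(aPages), &aPages[0]);
 * @endcode
 */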
1857
1858
1859/**
1860 * Allocate one or more pages.
1861 *
1862 * This is typically used for ROMs and MMIO2 (VRAM) during VM creation.
1863 *
1864 * @returns VBox status code:
1865 * @retval xxx
1866 *
1867 * @param pVM Pointer to the shared VM structure.
1868 * @param cPages The number of pages to allocate.
1869 * @param paPages Pointer to the page descriptors.
1870 * See GMMPAGEDESC for details on what is expected on input.
1871 * @param enmAccount The account to charge.
1872 *
1873 * @thread EMT.
1874 */
1875GMMR0DECL(int) GMMR0AllocatePages(PVM pVM, uint32_t cPages, PGMMPAGEDESC paPages, GMMACCOUNT enmAccount)
1876{
1877 LogFlow(("GMMR0AllocatePages: pVM=%p cPages=%#x paPages=%p enmAccount=%d\n", pVM, cPages, paPages, enmAccount));
1878
1879 /*
1880 * Validate, get basics and take the semaphore.
1881 */
1882 PGMM pGMM;
1883 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
1884 PGVM pGVM = GVMMR0ByVM(pVM);
1885 if (!pGVM)
1886 return VERR_INVALID_PARAMETER;
1887 if (pGVM->hEMT != RTThreadNativeSelf())
1888 return VERR_NOT_OWNER;
1889
1890 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
1891 AssertMsgReturn(enmAccount > GMMACCOUNT_INVALID && enmAccount < GMMACCOUNT_END, ("%d\n", enmAccount), VERR_INVALID_PARAMETER);
1892 AssertMsgReturn(cPages > 0 && cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
1893
1894 for (unsigned iPage = 0; iPage < cPages; iPage++)
1895 {
1896 AssertMsgReturn( paPages[iPage].HCPhysGCPhys == NIL_RTHCPHYS
1897 || paPages[iPage].HCPhysGCPhys == GMM_GCPHYS_UNSHAREABLE
1898 || ( enmAccount == GMMACCOUNT_BASE
1899 && paPages[iPage].HCPhysGCPhys < GMM_GCPHYS_END
1900 && !(paPages[iPage].HCPhysGCPhys & PAGE_OFFSET_MASK)),
1901 ("#%#x: %RHp enmAccount=%d\n", iPage, paPages[iPage].HCPhysGCPhys, enmAccount),
1902 VERR_INVALID_PARAMETER);
1903 AssertMsgReturn(paPages[iPage].idPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
1904 AssertMsgReturn(paPages[iPage].idSharedPage == NIL_GMM_PAGEID, ("#%#x: %#x\n", iPage, paPages[iPage].idSharedPage), VERR_INVALID_PARAMETER);
1905 }
1906
1907 int rc = RTSemFastMutexRequest(pGMM->Mtx);
1908 AssertRC(rc);
1909
1910 /* No allocations before the initial reservation has been made! */
1911 if ( pGVM->gmm.s.Reserved.cBasePages
1912 && pGVM->gmm.s.Reserved.cFixedPages
1913 && pGVM->gmm.s.Reserved.cShadowPages)
1914 rc = gmmR0AllocatePages(pGMM, pGVM, cPages, paPages, enmAccount);
1915 else
1916 rc = VERR_WRONG_ORDER;
1917
1918 RTSemFastMutexRelease(pGMM->Mtx);
1919 LogFlow(("GMMR0AllocatePages: returns %Rrc\n", rc));
1920 return rc;
1921}
1922
1923
1924/**
1925 * VMMR0 request wrapper for GMMR0AllocatePages.
1926 *
1927 * @returns see GMMR0AllocatePages.
1928 * @param pVM Pointer to the shared VM structure.
1929 * @param pReq The request packet.
1930 */
1931GMMR0DECL(int) GMMR0AllocatePagesReq(PVM pVM, PGMMALLOCATEPAGESREQ pReq)
1932{
1933 /*
1934 * Validate input and pass it on.
1935 */
1936 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
1937 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
1938 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[0]),
1939 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[0])),
1940 VERR_INVALID_PARAMETER);
1941 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[pReq->cPages]),
1942 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[pReq->cPages])),
1943 VERR_INVALID_PARAMETER);
1944
1945 return GMMR0AllocatePages(pVM, pReq->cPages, &pReq->aPages[0], pReq->enmAccount);
1946}
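
/*
 * A sketch of how a ring-3 caller might lay out the variable sized
 * GMMALLOCATEPAGESREQ packet so that it passes the cbReq checks above.
 * The header initialization and the cPages value are assumptions made
 * for illustration only.
 *
 * @code
 * uint32_t cPages = 2;
 * uint32_t cbReq  = RT_UOFFSETOF(GMMALLOCATEPAGESREQ, aPages[cPages]);
 * PGMMALLOCATEPAGESREQ pReq = (PGMMALLOCATEPAGESREQ)RTMemAllocZ(cbReq);
 * if (pReq)
 * {
 *     pReq->Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;  // assumed standard request header init
 *     pReq->Hdr.cbReq    = cbReq;                 // must equal RT_UOFFSETOF(..., aPages[cPages])
 *     pReq->enmAccount   = GMMACCOUNT_BASE;
 *     pReq->cPages       = cPages;
 *     for (uint32_t i = 0; i < cPages; i++)
 *     {
 *         pReq->aPages[i].HCPhysGCPhys = NIL_RTHCPHYS;
 *         pReq->aPages[i].idPage       = NIL_GMM_PAGEID;
 *         pReq->aPages[i].idSharedPage = NIL_GMM_PAGEID;
 *     }
 *     // ... submit pReq through the VMMR0 request path, then read back the page IDs.
 * }
 * @endcode
 */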
1947
1948
1949/**
1950 * Frees a chunk, giving it back to the host OS.
1951 *
1952 * @param pGMM Pointer to the GMM instance.
1953 * @param pChunk The chunk to free.
1954 */
1955static void gmmR0FreeChunk(PGMM pGMM, PGMMCHUNK pChunk)
1956{
1957 /*
1958 * If there are current mappings of the chunk, then request the
1959 * VMs to unmap them. Reposition the chunk in the free list so
1960 * it won't be a likely candidate for allocations.
1961 */
1962 if (pChunk->cMappings)
1963 {
1964 /** @todo R0 -> VM request */
1965
1966 }
1967 else
1968 {
1969 /*
1970 * Try to free the memory object.
1971 */
1972 int rc = RTR0MemObjFree(pChunk->MemObj, false /* fFreeMappings */);
1973 if (RT_SUCCESS(rc))
1974 {
1975 pChunk->MemObj = NIL_RTR0MEMOBJ;
1976
1977 /*
1978 * Unlink it from everywhere.
1979 */
1980 gmmR0UnlinkChunk(pChunk);
1981
1982 PAVLU32NODECORE pCore = RTAvlU32Remove(&pGMM->pChunks, pChunk->Core.Key);
1983 Assert(pCore == &pChunk->Core); NOREF(pCore);
1984
1985 PGMMCHUNKTLBE pTlbe = &pGMM->ChunkTLB.aEntries[GMM_CHUNKTLB_IDX(pCore->Key)];
1986 if (pTlbe->pChunk == pChunk)
1987 {
1988 pTlbe->idChunk = NIL_GMM_CHUNKID;
1989 pTlbe->pChunk = NULL;
1990 }
1991
1992 Assert(pGMM->cChunks > 0);
1993 pGMM->cChunks--;
1994
1995 /*
1996 * Free the Chunk ID and struct.
1997 */
1998 gmmR0FreeChunkId(pGMM, pChunk->Core.Key);
1999 pChunk->Core.Key = NIL_GMM_CHUNKID;
2000
2001 RTMemFree(pChunk->paMappings);
2002 pChunk->paMappings = NULL;
2003
2004 RTMemFree(pChunk);
2005 }
2006 else
2007 AssertRC(rc);
2008 }
2009}
2010
2011
2012/**
2013 * Free page worker.
2014 *
2015 * The caller does all the statistic decrementing, we do all the incrementing.
2016 *
2017 * @param pGMM Pointer to the GMM instance data.
2018 * @param pChunk Pointer to the chunk this page belongs to.
2019 * @param pPage Pointer to the page.
2020 */
2021static void gmmR0FreePageWorker(PGMM pGMM, PGMMCHUNK pChunk, PGMMPAGE pPage)
2022{
2023 /*
2024 * Put the page on the free list.
2025 */
2026 pPage->u = 0;
2027 pPage->Free.u2State = GMM_PAGE_STATE_FREE;
2028 Assert(pChunk->iFreeHead < RT_ELEMENTS(pChunk->aPages) || pChunk->iFreeHead == UINT16_MAX);
2029 pPage->Free.iNext = pChunk->iFreeHead;
2030 pChunk->iFreeHead = pPage - &pChunk->aPages[0];
2031
2032 /*
2033 * Update statistics (the cShared/cPrivate stats are up to date already),
2034 * and relink the chunk if necessary.
2035 */
2036 if ((pChunk->cFree & GMM_CHUNK_FREE_SET_MASK) == 0)
2037 {
2038 gmmR0UnlinkChunk(pChunk);
2039 pChunk->cFree++;
2040 gmmR0LinkChunk(pChunk, pChunk->cShared ? &pGMM->Shared : &pGMM->Private);
2041 }
2042 else
2043 {
2044 pChunk->cFree++;
2045 pChunk->pSet->cPages++;
2046
2047 /*
2048 * If the chunk becomes empty, consider giving memory back to the host OS.
2049 *
2050 * The current strategy is to try to give it back if there are other chunks
2051 * in this free list, meaning if there are at least 240 free pages in this
2052 * category. Note that since there are probably mappings of the chunk,
2053 * it won't be freed up instantly, which probably screws up this logic
2054 * a bit...
2055 */
2056 if (RT_UNLIKELY( pChunk->cFree == GMM_CHUNK_NUM_PAGES
2057 && pChunk->pFreeNext
2058 && pChunk->pFreePrev))
2059 gmmR0FreeChunk(pGMM, pChunk);
2060 }
2061}
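
/*
 * For reference: the free list manipulated above is an index based LIFO
 * within the chunk. Allocation pops it roughly like this (a simplified,
 * assumed sketch; the real pop lives in gmmR0AllocatePage):
 *
 * @code
 * uint16_t iPage    = pChunk->iFreeHead;                 // most recently freed page first
 * pChunk->iFreeHead = pChunk->aPages[iPage].Free.iNext;  // unlink; UINT16_MAX means empty
 * @endcode
 */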
2062
2063
2064/**
2065 * Frees a shared page, the page is known to exist and be valid and such.
2066 *
2067 * @param pGMM Pointer to the GMM instance.
2068 * @param idPage The Page ID
2069 * @param pPage The page structure.
2070 */
2071DECLINLINE(void) gmmR0FreeSharedPage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage)
2072{
2073 PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
2074 Assert(pChunk);
2075 Assert(pChunk->cFree < GMM_CHUNK_NUM_PAGES);
2076 Assert(pChunk->cShared > 0);
2077 Assert(pGMM->cSharedPages > 0);
2078 Assert(pGMM->cAllocatedPages > 0);
2079 Assert(!pPage->Shared.cRefs);
2080
2081 pChunk->cShared--;
2082 pGMM->cAllocatedPages--;
2083 pGMM->cSharedPages--;
2084 gmmR0FreePageWorker(pGMM, pChunk, pPage);
2085}
2086
2087
2088/**
2089 * Frees a private page, the page is known to exist and be valid and such.
2090 *
2091 * @param pGMM Pointer to the GMM instance.
2092 * @param idPage The Page ID
2093 * @param pPage The page structure.
2094 */
2095DECLINLINE(void) gmmR0FreePrivatePage(PGMM pGMM, uint32_t idPage, PGMMPAGE pPage)
2096{
2097 PGMMCHUNK pChunk = gmmR0GetChunk(pGMM, idPage >> GMM_CHUNKID_SHIFT);
2098 Assert(pChunk);
2099 Assert(pChunk->cFree < GMM_CHUNK_NUM_PAGES);
2100 Assert(pChunk->cPrivate > 0);
2101 Assert(pGMM->cAllocatedPages > 0);
2102
2103 pChunk->cPrivate--;
2104 pGMM->cAllocatedPages--;
2105 gmmR0FreePageWorker(pGMM, pChunk, pPage);
2106}
2107
2108
2109/**
2110 * Common worker for GMMR0FreePages and GMMR0BalloonedPages.
2111 *
2112 * @returns VBox status code:
2113 * @retval xxx
2114 *
2115 * @param pGMM Pointer to the GMM instance data.
2116 * @param pGVM Pointer to the shared VM structure.
2117 * @param cPages The number of pages to free.
2118 * @param paPages Pointer to the page descriptors.
2119 * @param enmAccount The account this relates to.
2120 */
2121static int gmmR0FreePages(PGMM pGMM, PGVM pGVM, uint32_t cPages, PGMMFREEPAGEDESC paPages, GMMACCOUNT enmAccount)
2122{
2123 /*
2124 * Check that the request isn't impossible wrt the account status.
2125 */
2126 switch (enmAccount)
2127 {
2128 case GMMACCOUNT_BASE:
2129 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cBasePages < cPages))
2130 {
2131 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cBasePages, cPages));
2132 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2133 }
2134 break;
2135 case GMMACCOUNT_SHADOW:
2136 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cShadowPages < cPages))
2137 {
2138 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cShadowPages, cPages));
2139 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2140 }
2141 break;
2142 case GMMACCOUNT_FIXED:
2143 if (RT_UNLIKELY(pGVM->gmm.s.Allocated.cFixedPages < cPages))
2144 {
2145 Log(("gmmR0FreePages: allocated=%#llx cPages=%#x!\n", pGVM->gmm.s.Allocated.cFixedPages, cPages));
2146 return VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2147 }
2148 break;
2149 default:
2150 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
2151 }
2152
2153 /*
2154 * Walk the descriptors and free the pages.
2155 *
2156 * Statistics (except the account) are being updated as we go along,
2157 * unlike the alloc code. Also, stop on the first error.
2158 */
2159 int rc = VINF_SUCCESS;
2160 uint32_t iPage;
2161 for (iPage = 0; iPage < cPages; iPage++)
2162 {
2163 uint32_t idPage = paPages[iPage].idPage;
2164 PGMMPAGE pPage = gmmR0GetPage(pGMM, idPage);
2165 if (RT_LIKELY(pPage))
2166 {
2167 if (RT_LIKELY(GMM_PAGE_IS_PRIVATE(pPage)))
2168 {
2169 if (RT_LIKELY(pPage->Private.hGVM == pGVM->hSelf))
2170 {
2171 Assert(pGVM->gmm.s.cPrivatePages);
2172 pGVM->gmm.s.cPrivatePages--;
2173 gmmR0FreePrivatePage(pGMM, idPage, pPage);
2174 }
2175 else
2176 {
2177 Log(("gmmR0FreePages: #%#x/%#x: not owner! hGVM=%#x hSelf=%#x\n", iPage, idPage,
2178 pPage->Private.hGVM, pGVM->hSelf));
2179 rc = VERR_GMM_NOT_PAGE_OWNER;
2180 break;
2181 }
2182 }
2183 else if (RT_LIKELY(GMM_PAGE_IS_SHARED(pPage)))
2184 {
2185 Assert(pGVM->gmm.s.cSharedPages);
2186 pGVM->gmm.s.cSharedPages--;
2187 Assert(pPage->Shared.cRefs);
2188 if (!--pPage->Shared.cRefs)
2189 gmmR0FreeSharedPage(pGMM, idPage, pPage);
2190 }
2191 else
2192 {
2193 Log(("gmmR0FreePages: #%#x/%#x: already free!\n", iPage, idPage));
2194 rc = VERR_GMM_PAGE_ALREADY_FREE;
2195 break;
2196 }
2197 }
2198 else
2199 {
2200 Log(("gmmR0FreePages: #%#x/%#x: not found!\n", iPage, idPage));
2201 rc = VERR_GMM_PAGE_NOT_FOUND;
2202 break;
2203 }
2204 paPages[iPage].idPage = NIL_GMM_PAGEID;
2205 }
2206
2207 /*
2208 * Update the account.
2209 */
2210 switch (enmAccount)
2211 {
2212 case GMMACCOUNT_BASE: pGVM->gmm.s.Allocated.cBasePages -= iPage; break;
2213 case GMMACCOUNT_SHADOW: pGVM->gmm.s.Allocated.cShadowPages -= iPage; break;
2214 case GMMACCOUNT_FIXED: pGVM->gmm.s.Allocated.cFixedPages -= iPage; break;
2215 default:
2216 AssertMsgFailedReturn(("enmAccount=%d\n", enmAccount), VERR_INTERNAL_ERROR);
2217 }
2218
2219 /*
2220 * Any threshold stuff to be done here?
2221 */
2222
2223 return rc;
2224}
2225
2226
2227/**
2228 * Free one or more pages.
2229 *
2230 * This is typically used at reset time or power off.
2231 *
2232 * @returns VBox status code:
2233 * @retval xxx
2234 *
2235 * @param pVM Pointer to the shared VM structure.
2236 * @param cPages The number of pages to free.
2237 * @param paPages Pointer to the page descriptors containing the Page IDs for each page.
2238 * @param enmAccount The account this relates to.
2239 * @thread EMT.
2240 */
2241GMMR0DECL(int) GMMR0FreePages(PVM pVM, uint32_t cPages, PGMMFREEPAGEDESC paPages, GMMACCOUNT enmAccount)
2242{
2243 LogFlow(("GMMR0FreePages: pVM=%p cPages=%#x paPages=%p enmAccount=%d\n", pVM, cPages, paPages, enmAccount));
2244
2245 /*
2246 * Validate input and get the basics.
2247 */
2248 PGMM pGMM;
2249 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2250 PGVM pGVM = GVMMR0ByVM(pVM);
2251 if (!pGVM)
2252 return VERR_INVALID_PARAMETER;
2253 if (pGVM->hEMT != RTThreadNativeSelf())
2254 return VERR_NOT_OWNER;
2255
2256 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
2257 AssertMsgReturn(enmAccount > GMMACCOUNT_INVALID && enmAccount < GMMACCOUNT_END, ("%d\n", enmAccount), VERR_INVALID_PARAMETER);
2258 AssertMsgReturn(cPages > 0 && cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
2259
2260 for (unsigned iPage = 0; iPage < cPages; iPage++)
2261 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
2262 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
2263 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
2264
2265 /*
2266 * Take the semaphore and call the worker function.
2267 */
2268 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2269 AssertRC(rc);
2270
2271 rc = gmmR0FreePages(pGMM, pGVM, cPages, paPages, enmAccount);
2272
2273 RTSemFastMutexRelease(pGMM->Mtx);
2274 LogFlow(("GMMR0FreePages: returns %Rrc\n", rc));
2275 return rc;
2276}
2277
2278
2279/**
2280 * VMMR0 request wrapper for GMMR0FreePages.
2281 *
2282 * @returns see GMMR0FreePages.
2283 * @param pVM Pointer to the shared VM structure.
2284 * @param pReq The request packet.
2285 */
2286GMMR0DECL(int) GMMR0FreePagesReq(PVM pVM, PGMMFREEPAGESREQ pReq)
2287{
2288 /*
2289 * Validate input and pass it on.
2290 */
2291 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2292 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2293 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[0]),
2294 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[0])),
2295 VERR_INVALID_PARAMETER);
2296 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[pReq->cPages]),
2297 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMFREEPAGESREQ, aPages[pReq->cPages])),
2298 VERR_INVALID_PARAMETER);
2299
2300 return GMMR0FreePages(pVM, pReq->cPages, &pReq->aPages[0], pReq->enmAccount);
2301}
2302
2303
2304/**
2305 * Report back on a memory ballooning request.
2306 *
2307 * The request may or may not have been initiated by the GMM. If it was initiated
2308 * by the GMM it is important that this function is called even if no pages were
2309 * ballooned.
2310 *
2311 * Since the whole purpose of ballooning is to free up guest RAM pages, this API
2312 * may also be given a set of related pages to be freed. These pages are assumed
2313 * to be on the base account.
2314 *
2315 * @returns VBox status code:
2316 * @retval xxx
2317 *
2318 * @param pVM Pointer to the shared VM structure.
2319 * @param cBalloonedPages The number of pages that were ballooned.
2320 * @param cPagesToFree The number of pages to be freed.
2321 * @param paPages Pointer to the page descriptors for the pages that are to be freed.
2322 * @param fCompleted Indicates whether the ballooning request was completed (true) or
2323 * if there are more pages to come (false). If the ballooning was
2324 * not triggered by the GMM, don't set this.
2325 * @thread EMT.
2326 */
2327GMMR0DECL(int) GMMR0BalloonedPages(PVM pVM, uint32_t cBalloonedPages, uint32_t cPagesToFree, PGMMFREEPAGEDESC paPages, bool fCompleted)
2328{
2329 LogFlow(("GMMR0BalloonedPages: pVM=%p cBalloonedPages=%#x cPagesToFree=%#x paPages=%p fCompleted=%RTbool\n",
2330 pVM, cBalloonedPages, cPagesToFree, paPages, fCompleted));
2331
2332 /*
2333 * Validate input and get the basics.
2334 */
2335 PGMM pGMM;
2336 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2337 PGVM pGVM = GVMMR0ByVM(pVM);
2338 if (!pGVM)
2339 return VERR_INVALID_PARAMETER;
2340 if (pGVM->hEMT != RTThreadNativeSelf())
2341 return VERR_NOT_OWNER;
2342
2343 AssertPtrReturn(paPages, VERR_INVALID_PARAMETER);
2344 AssertMsgReturn(cBalloonedPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cBalloonedPages), VERR_INVALID_PARAMETER);
2345 AssertMsgReturn(cPagesToFree <= cBalloonedPages, ("%#x\n", cPagesToFree), VERR_INVALID_PARAMETER);
2346
2347 for (unsigned iPage = 0; iPage < cPagesToFree; iPage++)
2348 AssertMsgReturn( paPages[iPage].idPage <= GMM_PAGEID_LAST
2349 /*|| paPages[iPage].idPage == NIL_GMM_PAGEID*/,
2350 ("#%#x: %#x\n", iPage, paPages[iPage].idPage), VERR_INVALID_PARAMETER);
2351
2352 /*
2353 * Take the semaphore and do some more validations.
2354 */
2355 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2356 AssertRC(rc);
2357 if (pGVM->gmm.s.Allocated.cBasePages >= cPagesToFree)
2358 {
2359 /*
2360 * Record the ballooned memory.
2361 */
2362 pGMM->cBalloonedPages += cBalloonedPages;
2363 if (pGVM->gmm.s.cReqBalloonedPages)
2364 {
2365 pGVM->gmm.s.cBalloonedPages += cBalloonedPages;
2366 pGVM->gmm.s.cReqActuallyBalloonedPages += cBalloonedPages;
2367 if (fCompleted)
2368 {
2369 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx; / VM: Total=%#llx Req=%#llx Actual=%#llx (completed)\n", cBalloonedPages,
2370 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqBalloonedPages, pGVM->gmm.s.cReqActuallyBalloonedPages));
2371
2372 /*
2373 * Anything we need to do here now when the request has been completed?
2374 */
2375 pGVM->gmm.s.cReqBalloonedPages = 0;
2376 }
2377 else
2378 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx Req=%#llx Actual=%#llx (pending)\n", cBalloonedPages,
2379 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqBalloonedPages, pGVM->gmm.s.cReqActuallyBalloonedPages));
2380 }
2381 else
2382 {
2383 pGVM->gmm.s.cBalloonedPages += cBalloonedPages;
2384 Log(("GMMR0BalloonedPages: +%#x - Global=%#llx / VM: Total=%#llx (user)\n",
2385 cBalloonedPages, pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages));
2386 }
2387
2388 /*
2389 * Any pages to free?
2390 */
2391 if (cPagesToFree)
2392 rc = gmmR0FreePages(pGMM, pGVM, cPagesToFree, paPages, GMMACCOUNT_BASE);
2393 }
2394 else
2395 {
2396 rc = VERR_GMM_ATTEMPT_TO_FREE_TOO_MUCH;
2397 }
2398
2399 RTSemFastMutexRelease(pGMM->Mtx);
2400 LogFlow(("GMMR0BalloonedPages: returns %Rrc\n", rc));
2401 return rc;
2402}
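
/*
 * An illustrative sketch of reporting a completed, GMM initiated balloon
 * request: the guest gave up 256 pages and the caller hands back 8 page IDs
 * for freeing on the base account. All counts and the aidBalloonedPages
 * array are assumed example values.
 *
 * @code
 * GMMFREEPAGEDESC aPages[8];
 * for (unsigned i = 0; i < RT_ELEMENTS(aPages); i++)
 *     aPages[i].idPage = aidBalloonedPages[i];   // IDs collected by the caller (hypothetical)
 * int rc = GMMR0BalloonedPages(pVM, 256 /+ cBalloonedPages +/, RT_ELEMENTS(aPages),
 *                              &aPages[0], true /+ fCompleted +/);
 * @endcode
 */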
2403
2404
2405/**
2406 * VMMR0 request wrapper for GMMR0BalloonedPages.
2407 *
2408 * @returns see GMMR0BalloonedPages.
2409 * @param pVM Pointer to the shared VM structure.
2410 * @param pReq The request packet.
2411 */
2412GMMR0DECL(int) GMMR0BalloonedPagesReq(PVM pVM, PGMMBALLOONEDPAGESREQ pReq)
2413{
2414 /*
2415 * Validate input and pass it on.
2416 */
2417 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2418 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2419 AssertMsgReturn(pReq->Hdr.cbReq >= RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[0]),
2420 ("%#x < %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[0])),
2421 VERR_INVALID_PARAMETER);
2422 AssertMsgReturn(pReq->Hdr.cbReq == RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[pReq->cPagesToFree]),
2423 ("%#x != %#x\n", pReq->Hdr.cbReq, RT_UOFFSETOF(GMMBALLOONEDPAGESREQ, aPages[pReq->cPagesToFree])),
2424 VERR_INVALID_PARAMETER);
2425
2426 return GMMR0BalloonedPages(pVM, pReq->cBalloonedPages, pReq->cPagesToFree, &pReq->aPages[0], pReq->fCompleted);
2427}
2428
2429
2430/**
2431 * Report balloon deflating.
2432 *
2433 * @returns VBox status code:
2434 * @retval xxx
2435 *
2436 * @param pVM Pointer to the shared VM structure.
2437 * @param cPages The number of pages that was let out of the balloon.
2438 * @thread EMT.
2439 */
2440GMMR0DECL(int) GMMR0DeflatedBalloon(PVM pVM, uint32_t cPages)
2441{
2442 LogFlow(("GMMR0DeflatedBalloon: pVM=%p cPages=%#x\n", pVM, cPages));
2443
2444 /*
2445 * Validate input and get the basics.
2446 */
2447 PGMM pGMM;
2448 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2449 PGVM pGVM = GVMMR0ByVM(pVM);
2450 if (!pGVM)
2451 return VERR_INVALID_PARAMETER;
2452 if (pGVM->hEMT != RTThreadNativeSelf())
2453 return VERR_NOT_OWNER;
2454
2455 AssertMsgReturn(cPages < RT_BIT(32 - PAGE_SHIFT), ("%#x\n", cPages), VERR_INVALID_PARAMETER);
2456
2457 /*
2458 * Take the semaphore and do some more validations.
2459 */
2460 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2461 AssertRC(rc);
2462
2463 if (pGVM->gmm.s.cBalloonedPages >= cPages)
2464 {
2465 Assert(pGMM->cBalloonedPages >= pGVM->gmm.s.cBalloonedPages);
2466
2467 /*
2468 * Record it.
2469 */
2470 pGMM->cBalloonedPages -= cPages;
2471 pGVM->gmm.s.cBalloonedPages -= cPages;
2472 if (pGVM->gmm.s.cReqDeflatePages)
2473 {
2474 Log(("GMMR0DeflatedBalloon: -%#x - Global=%#llx / VM: Total=%#llx Req=%#llx\n", cPages,
2475 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages, pGVM->gmm.s.cReqDeflatePages));
2476
2477 /*
2478 * Anything we need to do here now when the request has been completed?
2479 */
2480 pGVM->gmm.s.cReqDeflatePages = 0;
2481 }
2482 else
2483 Log(("GMMR0DeflatedBalloon: -%#x - Global=%#llx / VM: Total=%#llx\n", cPages,
2484 pGMM->cBalloonedPages, pGVM->gmm.s.cBalloonedPages));
2485 }
2486 else
2487 {
2488 Log(("GMMR0DeflatedBalloon: cBalloonedPages=%#llx cPages=%#x\n", pGVM->gmm.s.cBalloonedPages, cPages));
2489 rc = VERR_GMM_ATTEMPT_TO_DEFLATE_TOO_MUCH;
2490 }
2491
2492 RTSemFastMutexRelease(pGMM->Mtx);
2493 LogFlow(("GMMR0DeflatedBalloon: returns %Rrc\n", rc));
2494 return rc;
2495}
2496
2497
2498/**
2499 * Unmaps a chunk previously mapped into the address space of the current process.
2500 *
2501 * @returns VBox status code.
2502 * @param pGMM Pointer to the GMM instance data.
2503 * @param pGVM Pointer to the Global VM structure.
2504 * @param pChunk Pointer to the chunk to be unmapped.
2505 */
2506static int gmmR0UnmapChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk)
2507{
2508 /*
2509 * Find the mapping and try unmapping it.
2510 */
2511 for (uint32_t i = 0; i < pChunk->cMappings; i++)
2512 {
2513 Assert(pChunk->paMappings[i].pGVM && pChunk->paMappings[i].MapObj != NIL_RTR0MEMOBJ);
2514 if (pChunk->paMappings[i].pGVM == pGVM)
2515 {
2516 /* unmap */
2517 int rc = RTR0MemObjFree(pChunk->paMappings[i].MapObj, false /* fFreeMappings (NA) */);
2518 if (RT_SUCCESS(rc))
2519 {
2520 /* update the record. */
2521 pChunk->cMappings--;
2522 if (i < pChunk->cMappings)
2523 pChunk->paMappings[i] = pChunk->paMappings[pChunk->cMappings];
2524 pChunk->paMappings[pChunk->cMappings].MapObj = NIL_RTR0MEMOBJ;
2525 pChunk->paMappings[pChunk->cMappings].pGVM = NULL;
2526 }
2527 return rc;
2528 }
2529 }
2530
2531 Log(("gmmR0UnmapChunk: Chunk %#x is not mapped into pGVM=%p/%#x\n", pChunk->Core.Key, pGVM, pGVM->hSelf));
2532 return VERR_GMM_CHUNK_NOT_MAPPED;
2533}
2534
2535
2536/**
2537 * Maps a chunk into the user address space of the current process.
2538 *
2539 * @returns VBox status code.
2540 * @param pGMM Pointer to the GMM instance data.
2541 * @param pGVM Pointer to the Global VM structure.
2542 * @param pChunk Pointer to the chunk to be mapped.
2543 * @param ppvR3 Where to store the ring-3 address of the mapping.
2544 * In the VERR_GMM_CHUNK_ALREADY_MAPPED case, this will
2545 * contain the address of the existing mapping.
2546 */
2547static int gmmR0MapChunk(PGMM pGMM, PGVM pGVM, PGMMCHUNK pChunk, PRTR3PTR ppvR3)
2548{
2549 /*
2550 * Check to see if the chunk is already mapped.
2551 */
2552 for (uint32_t i = 0; i < pChunk->cMappings; i++)
2553 {
2554 Assert(pChunk->paMappings[i].pGVM && pChunk->paMappings[i].MapObj != NIL_RTR0MEMOBJ);
2555 if (pChunk->paMappings[i].pGVM == pGVM)
2556 {
2557 *ppvR3 = RTR0MemObjAddressR3(pChunk->paMappings[i].MapObj);
2558 Log(("gmmR0MapChunk: chunk %#x is already mapped at %p!\n", pChunk->Core.Key, *ppvR3));
2559 return VERR_GMM_CHUNK_ALREADY_MAPPED;
2560 }
2561 }
2562
2563 /*
2564 * Do the mapping.
2565 */
2566 RTR0MEMOBJ MapObj;
2567 int rc = RTR0MemObjMapUser(&MapObj, pChunk->MemObj, (RTR3PTR)-1, 0, RTMEM_PROT_READ | RTMEM_PROT_WRITE, NIL_RTR0PROCESS);
2568 if (RT_SUCCESS(rc))
2569 {
2570 /* reallocate the array? */
2571 if ((pChunk->cMappings & 1 /*7*/) == 0)
2572 {
2573 void *pvMappings = RTMemRealloc(pChunk->paMappings, (pChunk->cMappings + 2 /*8*/) * sizeof(pChunk->paMappings[0]));
2574 if (RT_UNLIKELY(!pvMappings))
2575 {
2576 rc = RTR0MemObjFree(MapObj, false /* fFreeMappings (NA) */);
2577 AssertRC(rc);
2578 return VERR_NO_MEMORY;
2579 }
2580 pChunk->paMappings = (PGMMCHUNKMAP)pvMappings;
2581 }
2582
2583 /* insert new entry */
2584 pChunk->paMappings[pChunk->cMappings].MapObj = MapObj;
2585 pChunk->paMappings[pChunk->cMappings].pGVM = pGVM;
2586 pChunk->cMappings++;
2587
2588 *ppvR3 = RTR0MemObjAddressR3(MapObj);
2589 }
2590
2591 return rc;
2592}
2593
2594
2595/**
2596 * Map a chunk and/or unmap another chunk.
2597 *
2598 * The mapping and unmapping applies to the current process.
2599 *
2600 * This API does two things because it saves a kernel call per mapping
2601 * when the ring-3 mapping cache is full.
2602 *
2603 * @returns VBox status code.
2604 * @param pVM The VM.
2605 * @param idChunkMap The chunk to map. NIL_GMM_CHUNKID if nothing to map.
2606 * @param idChunkUnmap The chunk to unmap. NIL_GMM_CHUNKID if nothing to unmap.
2607 * @param ppvR3 Where to store the address of the mapped chunk. NULL is ok if nothing to map.
2608 * @thread EMT
2609 */
2610GMMR0DECL(int) GMMR0MapUnmapChunk(PVM pVM, uint32_t idChunkMap, uint32_t idChunkUnmap, PRTR3PTR ppvR3)
2611{
2612 LogFlow(("GMMR0MapUnmapChunk: pVM=%p idChunkMap=%#x idChunkUnmap=%#x ppvR3=%p\n",
2613 pVM, idChunkMap, idChunkUnmap, ppvR3));
2614
2615 /*
2616 * Validate input and get the basics.
2617 */
2618 PGMM pGMM;
2619 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2620 PGVM pGVM = GVMMR0ByVM(pVM);
2621 if (!pGVM)
2622 return VERR_INVALID_PARAMETER;
2623 if (pGVM->hEMT != RTThreadNativeSelf())
2624 return VERR_NOT_OWNER;
2625
2626 AssertCompile(NIL_GMM_CHUNKID == 0);
2627 AssertMsgReturn(idChunkMap <= GMM_CHUNKID_LAST, ("%#x\n", idChunkMap), VERR_INVALID_PARAMETER);
2628 AssertMsgReturn(idChunkUnmap <= GMM_CHUNKID_LAST, ("%#x\n", idChunkUnmap), VERR_INVALID_PARAMETER);
2629
2630 if ( idChunkMap == NIL_GMM_CHUNKID
2631 && idChunkUnmap == NIL_GMM_CHUNKID)
2632 return VERR_INVALID_PARAMETER;
2633
2634 if (idChunkMap != NIL_GMM_CHUNKID)
2635 {
2636 AssertPtrReturn(ppvR3, VERR_INVALID_POINTER);
2637 *ppvR3 = NIL_RTR3PTR;
2638 }
2639
2640 if (pGMM->fLegacyMode)
2641 {
2642 Log(("GMMR0MapUnmapChunk: legacy mode!\n"));
2643 return VERR_NOT_SUPPORTED;
2644 }
2645
2646 /*
2647 * Take the semaphore and do the work.
2648 *
2649 * The unmapping is done last since it's easier to undo a mapping than
2650 * undoing an unmapping. The ring-3 mapping cache cannot be so big
2651 * that it pushes the user virtual address space to within a chunk of
2652 * its limits, so no problem here.
2653 */
2654 int rc = RTSemFastMutexRequest(pGMM->Mtx);
2655 AssertRC(rc);
2656
2657 PGMMCHUNK pMap = NULL;
2658 if (idChunkMap != NIL_GMM_CHUNKID)
2659 {
2660 pMap = gmmR0GetChunk(pGMM, idChunkMap);
2661 if (RT_LIKELY(pMap))
2662 rc = gmmR0MapChunk(pGMM, pGVM, pMap, ppvR3);
2663 else
2664 {
2665 Log(("GMMR0MapUnmapChunk: idChunkMap=%#x\n", idChunkMap));
2666 rc = VERR_GMM_CHUNK_NOT_FOUND;
2667 }
2668 }
2669
2670 if ( idChunkUnmap != NIL_GMM_CHUNKID
2671 && RT_SUCCESS(rc))
2672 {
2673 PGMMCHUNK pUnmap = gmmR0GetChunk(pGMM, idChunkUnmap);
2674 if (RT_LIKELY(pUnmap))
2675 rc = gmmR0UnmapChunk(pGMM, pGVM, pUnmap);
2676 else
2677 {
2678 Log(("GMMR0MapUnmapChunk: idChunkUnmap=%#x\n", idChunkUnmap));
2679 rc = VERR_GMM_CHUNK_NOT_FOUND;
2680 }
2681
2682 if (RT_FAILURE(rc) && pMap)
2683 gmmR0UnmapChunk(pGMM, pGVM, pMap);
2684 }
2685
2686 RTSemFastMutexRelease(pGMM->Mtx);
2687
2688 LogFlow(("GMMR0MapUnmapChunk: returns %Rrc\n", rc));
2689 return rc;
2690}
2691
2692
2693/**
2694 * VMMR0 request wrapper for GMMR0MapUnmapChunk.
2695 *
2696 * @returns see GMMR0MapUnmapChunk.
2697 * @param pVM Pointer to the shared VM structure.
2698 * @param pReq The request packet.
2699 */
2700GMMR0DECL(int) GMMR0MapUnmapChunkReq(PVM pVM, PGMMMAPUNMAPCHUNKREQ pReq)
2701{
2702 /*
2703 * Validate input and pass it on.
2704 */
2705 AssertPtrReturn(pVM, VERR_INVALID_POINTER);
2706 AssertPtrReturn(pReq, VERR_INVALID_POINTER);
2707 AssertMsgReturn(pReq->Hdr.cbReq == sizeof(*pReq), ("%#x != %#x\n", pReq->Hdr.cbReq, sizeof(*pReq)), VERR_INVALID_PARAMETER);
2708
2709 return GMMR0MapUnmapChunk(pVM, pReq->idChunkMap, pReq->idChunkUnmap, &pReq->pvR3);
2710}
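
/*
 * A sketch of the intended use case: a full ring-3 mapping cache evicts one
 * chunk and maps another with a single ring-0 call. idNewChunk/idOldChunk
 * are assumed inputs from the caller's cache bookkeeping.
 *
 * @code
 * GMMMAPUNMAPCHUNKREQ Req;
 * Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;   // assumed standard request header init
 * Req.Hdr.cbReq    = sizeof(Req);
 * Req.idChunkMap   = idNewChunk;             // chunk to map into this process
 * Req.idChunkUnmap = idOldChunk;             // cache victim to unmap in the same call
 * Req.pvR3         = NIL_RTR3PTR;
 * // ... submit &Req through the VMMR0 request path; on success Req.pvR3
 * // holds the ring-3 address of the new mapping.
 * @endcode
 */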
2711
2712
2713/**
2714 * Legacy mode API for supplying pages.
2715 *
2716 * The specified user address points to an allocation-chunk-sized block that
2717 * will be locked down and used by the GMM when it is asked for pages.
2718 *
2719 * @returns VBox status code.
2720 * @param pVM The VM.
2721 * @param pvR3 Pointer to the chunk size memory block to lock down.
2722 */
2723GMMR0DECL(int) GMMR0SeedChunk(PVM pVM, RTR3PTR pvR3)
2724{
2725 /*
2726 * Validate input and get the basics.
2727 */
2728 PGMM pGMM;
2729 GMM_GET_VALID_INSTANCE(pGMM, VERR_INTERNAL_ERROR);
2730 PGVM pGVM = GVMMR0ByVM(pVM);
2731 if (!pGVM)
2732 return VERR_INVALID_PARAMETER;
2733 if (pGVM->hEMT != RTThreadNativeSelf())
2734 return VERR_NOT_OWNER;
2735
2736 AssertPtrReturn(pvR3, VERR_INVALID_POINTER);
2737 AssertReturn(!(PAGE_OFFSET_MASK & pvR3), VERR_INVALID_POINTER);
2738
2739 if (!pGMM->fLegacyMode)
2740 {
2741 Log(("GMMR0SeedChunk: not in legacy mode!\n"));
2742 return VERR_NOT_SUPPORTED;
2743 }
2744
2745 /*
2746 * Lock the memory before taking the semaphore.
2747 */
2748 RTR0MEMOBJ MemObj;
2749 int rc = RTR0MemObjLockUser(&MemObj, pvR3, GMM_CHUNK_SIZE, NIL_RTR0PROCESS);
2750 if (RT_SUCCESS(rc))
2751 {
2752 /*
2753 * Take the semaphore and add a new chunk with our hGVM.
2754 */
2755 rc = RTSemFastMutexRequest(pGMM->Mtx);
2756 AssertRC(rc);
2757
2758 rc = gmmR0RegisterChunk(pGMM, &pGMM->Private, MemObj, pGVM->hSelf);
2759
2760 RTSemFastMutexRelease(pGMM->Mtx);
2761
2762 if (RT_FAILURE(rc))
2763 RTR0MemObjFree(MemObj, false /* fFreeMappings */);
2764 }
2765
2766 return rc;
2767}
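
/*
 * An illustrative legacy mode sketch: ring-3 donates a page aligned,
 * chunk sized block which the GMM locks down and allocates guest pages
 * from. The direct call is a simplification; a real caller would go
 * through the VMMR0 request path.
 *
 * @code
 * void *pvSeed = RTMemPageAlloc(GMM_CHUNK_SIZE);   // page aligned by definition
 * if (pvSeed)
 * {
 *     int rc = GMMR0SeedChunk(pVM, (RTR3PTR)pvSeed);
 *     // on success the GMM owns the block until the VM is destroyed.
 * }
 * @endcode
 */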
2768