VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/TM.cpp@93595

Last change on this file since 93595 was 93554, checked in by vboxsync, 3 years ago

VMM: Changed PAGE_SIZE -> GUEST_PAGE_SIZE / HOST_PAGE_SIZE, PAGE_SHIFT -> GUEST_PAGE_SHIFT / HOST_PAGE_SHIFT, and PAGE_OFFSET_MASK -> GUEST_PAGE_OFFSET_MASK / HOST_PAGE_OFFSET_MASK. Also removed most usage of ASMMemIsZeroPage and ASMMemZeroPage since the host and guest page size doesn't need to be the same any more. Some work left to do in the page pool code. bugref:9898

/* $Id: TM.cpp 93554 2022-02-02 22:57:02Z vboxsync $ */
/** @file
 * TM - Time Manager.
 */

/*
 * Copyright (C) 2006-2022 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/** @page pg_tm TM - The Time Manager
 *
 * The Time Manager abstracts the CPU clocks and manages timers used by the
 * VMM, devices and drivers.
 *
 * @see grp_tm
 *
 *
 * @section sec_tm_clocks Clocks
 *
 * There are currently 4 clocks:
 *   - Virtual (guest).
 *   - Synchronous virtual (guest).
 *   - CPU Tick (TSC) (guest).  Only current use is rdtsc emulation.  Usually a
 *     function of the virtual clock.
 *   - Real (host).  This is only used for display updates atm.
 *
 * The most important clocks are the first three and, of these, the second is
 * the most interesting.
 *
 *
 * The synchronous virtual clock is tied to the virtual clock except that it
 * will take into account timer delivery lag caused by host scheduling.  It
 * will normally never advance beyond the head timer, and when lagging too far
 * behind it will gradually speed up to catch up with the virtual clock.  All
 * devices implementing time sources accessible to and used by the guest use
 * this clock (for timers and other things).  This ensures consistency between
 * the time sources.
 *
 * The virtual clock is implemented as an offset to a monotonic, high
 * resolution, wall clock.  The current time source is using the RTTimeNanoTS()
 * machinery based upon the Global Info Page (GIP), that is, we're using TSC
 * deltas to fill the gaps between GIP updates (usually 10 ms apart).  The
 * result is a fairly high res clock that works in all contexts and on all
 * hosts.  The virtual clock is paused when the VM isn't in the running state.
 *
 * The CPU tick (TSC) is normally virtualized as a function of the synchronous
 * virtual clock, where the frequency defaults to the host CPU frequency (as we
 * measure it).  In this mode it is possible to configure the frequency.
 * Another (non-default) option is to use the raw unmodified host TSC values.
 * And yet another, to tie it to time spent executing guest code.  All these
 * things are configurable should non-default behavior be desirable.
 *
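 * As a rough sketch of that first mode (the helper name and the split
 * division are illustrative only, not the actual implementation), the
 * virtualized TSC is just the synchronous virtual time scaled by the
 * configured frequency:
 * @code
 *      // ticks = ns * cTicksPerSec / 1e9, split to avoid 64-bit overflow.
 *      uint64_t tmSketchTscFromVirtualSync(uint64_t cNsVirtualSync, uint64_t cTicksPerSec)
 *      {
 *          return cNsVirtualSync / UINT32_C(1000000000) * cTicksPerSec
 *               + cNsVirtualSync % UINT32_C(1000000000) * cTicksPerSec / UINT32_C(1000000000);
 *      }
 * @endcode
 * For instance, 2 seconds of virtual sync time at a configured 100 MHz gives
 * 200000000 ticks.
 *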
 * The real clock is a monotonic clock (when available) with relatively low
 * resolution, though this is a bit host specific.  Note that we're currently
 * not servicing timers using the real clock when the VM is not running; this
 * is simply because it has not been needed yet and therefore not implemented.
 *
 *
 * @subsection subsec_tm_timesync Guest Time Sync / UTC time
 *
 * Guest time syncing is primarily taken care of by the VMM device.  The
 * principle is very simple: the guest additions periodically ask the VMM
 * device what the current UTC time is and make adjustments accordingly.
 *
 * A complicating factor is that the synchronous virtual clock might be doing
 * catch-ups, so the guest's perception is currently a little bit behind the
 * world, but it will (hopefully) be catching up soon as we're feeding timer
 * interrupts at a slightly higher rate.  Adjusting the guest clock to the
 * current wall time in the real world would be a bad idea then, because the
 * guest would be advancing too fast and run ahead of world time (if the
 * catch-up works out).  To solve this problem TM provides the VMM device with
 * a UTC time source that gets adjusted with the current lag, so that when the
 * guest eventually catches up the lag it will be showing correct real world
 * time.
 *
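 * A minimal sketch of that adjustment (the names are invented for the
 * example, not taken from the TM API):
 * @code
 *      // Report UTC shifted back by however far virtual sync is lagging, so
 *      // the guest shows correct wall time once the lag is caught up.
 *      uint64_t tmSketchAdjustedUtcNs(uint64_t nsHostUtc, uint64_t nsVirtualSyncLag)
 *      {
 *          return nsHostUtc - nsVirtualSyncLag;
 *      }
 * @endcode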
 *
 * @section sec_tm_timers Timers
 *
 * The timers can use any of the TM clocks described in the previous section.
 * Each clock has its own scheduling facility, or timer queue if you like.
 * There are a few factors which make it a bit complex.  First, there is the
 * usual R0 vs R3 vs RC thing.  Then there are multiple threads, and then
 * there is the timer thread that periodically checks whether any timers have
 * expired without EMT noticing.  On the API level, all but the create and
 * save APIs must be multithreaded.  EMT will always run the timers.
 *
 * The design uses a doubly linked list of active timers which is ordered by
 * expire date.  This list is only modified by the EMT thread.  Updates to the
 * list are batched in a singly linked list, which is then processed by the
 * EMT thread at the first opportunity (immediately, next time EMT modifies a
 * timer on that clock, or next timer timeout).  Both lists are offset based
 * and all the elements are therefore allocated from the hyper heap.
 *
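 * A sketch of the shape of those two lists (field and function names invented
 * for the illustration, and plain pointers stand in for the offset-based
 * links; the real layout lives in TMInternal.h):
 * @code
 *      typedef struct TMSKETCHTIMER
 *      {
 *          uint64_t              u64Expire;     // Absolute expiry time.
 *          struct TMSKETCHTIMER *pNext, *pPrev; // Active list, sorted by u64Expire.
 *          struct TMSKETCHTIMER *pSchedNext;    // Pending-update batch (singly linked).
 *      } TMSKETCHTIMER;
 *
 *      // Insert into the expire-sorted active list; only EMT does this.
 *      void tmSketchLinkActive(TMSKETCHTIMER **ppHead, TMSKETCHTIMER *pTimer)
 *      {
 *          TMSKETCHTIMER *pPrev = NULL, *pCur = *ppHead;
 *          while (pCur && pCur->u64Expire <= pTimer->u64Expire)
 *          {
 *              pPrev = pCur;
 *              pCur  = pCur->pNext;
 *          }
 *          pTimer->pPrev = pPrev;
 *          pTimer->pNext = pCur;
 *          if (pPrev) pPrev->pNext = pTimer; else *ppHead = pTimer;
 *          if (pCur)  pCur->pPrev  = pTimer;
 *      }
 * @endcode
 *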
 * For figuring out when there is a need to schedule and run timers TM will:
 *   - Poll whenever somebody queries the virtual clock.
 *   - Poll the virtual clocks from the EM and REM loops.
 *   - Poll the virtual clocks from the trap exit path.
 *   - Poll the virtual clocks and calculate first timeout from the halt loop.
 *   - Employ a thread which periodically (100 Hz) polls all the timer queues.
 *
 *
 * @image html TMTIMER-Statechart-Diagram.gif
 *
 * @section sec_tm_timer Logging
 *
 * Level 2: Logs most of the timer state transitions and queue servicing.
 * Level 3: Logs a few oddments.
 * Level 4: Logs TMCLOCK_VIRTUAL_SYNC catch-up events.
 *
 */


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_TM
#ifdef DEBUG_bird
# define DBGFTRACE_DISABLED /* annoying */
#endif
#include <VBox/vmm/tm.h>
#include <iprt/asm-amd64-x86.h> /* for SUPGetCpuHzFromGip from sup.h */
#include <VBox/vmm/vmm.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/hm.h>
#include <VBox/vmm/nem.h>
#include <VBox/vmm/gim.h>
#include <VBox/vmm/ssm.h>
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/dbgftrace.h>
#include <VBox/vmm/pdmapi.h>
#include <VBox/vmm/iom.h>
#include "TMInternal.h"
#include <VBox/vmm/vm.h>
#include <VBox/vmm/uvm.h>

#include <VBox/vmm/pdmdev.h>
#include <VBox/log.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <iprt/asm.h>
#include <iprt/asm-math.h>
#include <iprt/assert.h>
#include <iprt/env.h>
#include <iprt/file.h>
#include <iprt/getopt.h>
#include <iprt/mem.h>
#include <iprt/rand.h>
#include <iprt/semaphore.h>
#include <iprt/string.h>
#include <iprt/thread.h>
#include <iprt/time.h>
#include <iprt/timer.h>

#include "TMInline.h"


/*********************************************************************************************************************************
*   Defined Constants And Macros                                                                                                 *
*********************************************************************************************************************************/
/** The current saved state version. */
#define TM_SAVED_STATE_VERSION  3


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
static bool                 tmR3HasFixedTSC(PVM pVM);
static uint64_t             tmR3CalibrateTSC(void);
static DECLCALLBACK(int)    tmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)    tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
#ifdef VBOX_WITH_STATISTICS
static void                 tmR3TimerQueueRegisterStats(PVM pVM, PTMTIMERQUEUE pQueue, uint32_t cTimers);
#endif
static DECLCALLBACK(void)   tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t iTick);
static DECLCALLBACK(int)    tmR3SetWarpDrive(PUVM pUVM, uint32_t u32Percent);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
static DECLCALLBACK(void)   tmR3CpuLoadTimer(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser);
#endif
static DECLCALLBACK(void)   tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void)   tmR3InfoCpuLoad(PVM pVM, PCDBGFINFOHLP pHlp, int cArgs, char **papszArgs);
static DECLCALLBACK(VBOXSTRICTRC) tmR3CpuTickParavirtDisable(PVM pVM, PVMCPU pVCpu, void *pvData);
static const char          *tmR3GetTSCModeName(PVM pVM);
static const char          *tmR3GetTSCModeNameEx(TMTSCMODE enmMode);
static int                  tmR3TimerQueueGrow(PVM pVM, PTMTIMERQUEUE pQueue, uint32_t cNewTimers);

/**
 * Initializes the TM.
 *
 * @returns VBox status code.
 * @param   pVM     The cross context VM structure.
 */
VMM_INT_DECL(int) TMR3Init(PVM pVM)
{
    LogFlow(("TMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertCompileMemberAlignment(VM, tm.s, 32);
    AssertCompile(sizeof(pVM->tm.s) <= sizeof(pVM->tm.padding));
    AssertCompileMemberAlignment(TM, VirtualSyncLock, 8);

    /*
     * Init the structure.
     */
    pVM->tm.s.idTimerCpu = pVM->cCpus - 1; /* The last CPU. */

    int rc = PDMR3CritSectInit(pVM, &pVM->tm.s.VirtualSyncLock, RT_SRC_POS, "TM VirtualSync Lock");
    AssertLogRelRCReturn(rc, rc);

    strcpy(pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].szName, "virtual");
    strcpy(pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC].szName, "virtual_sync"); /* Underscore is for STAM ordering issue. */
    strcpy(pVM->tm.s.aTimerQueues[TMCLOCK_REAL].szName, "real");
    strcpy(pVM->tm.s.aTimerQueues[TMCLOCK_TSC].szName, "tsc");

    for (uint32_t i = 0; i < RT_ELEMENTS(pVM->tm.s.aTimerQueues); i++)
    {
        Assert(pVM->tm.s.aTimerQueues[i].szName[0] != '\0');
        pVM->tm.s.aTimerQueues[i].enmClock        = (TMCLOCK)i;
        pVM->tm.s.aTimerQueues[i].u64Expire       = INT64_MAX;
        pVM->tm.s.aTimerQueues[i].idxActive       = UINT32_MAX;
        pVM->tm.s.aTimerQueues[i].idxSchedule     = UINT32_MAX;
        pVM->tm.s.aTimerQueues[i].idxFreeHint     = 1;
        pVM->tm.s.aTimerQueues[i].fBeingProcessed = false;
        pVM->tm.s.aTimerQueues[i].fCannotGrow     = false;
        pVM->tm.s.aTimerQueues[i].hThread         = NIL_RTTHREAD;
        pVM->tm.s.aTimerQueues[i].hWorkerEvt      = NIL_SUPSEMEVENT;

        rc = PDMR3CritSectInit(pVM, &pVM->tm.s.aTimerQueues[i].TimerLock, RT_SRC_POS,
                               "TM %s queue timer lock", pVM->tm.s.aTimerQueues[i].szName);
        AssertLogRelRCReturn(rc, rc);

        rc = PDMR3CritSectRwInit(pVM, &pVM->tm.s.aTimerQueues[i].AllocLock, RT_SRC_POS,
                                 "TM %s queue alloc lock", pVM->tm.s.aTimerQueues[i].szName);
        AssertLogRelRCReturn(rc, rc);
    }

    /*
     * We directly use the GIP to calculate the virtual time.  We map the
     * GIP into the guest context so we can do this calculation there as
     * well and save costly world switches.
     */
    PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;
    if (pGip || !SUPR3IsDriverless())
    {
        pVM->tm.s.pvGIPR3 = (void *)pGip;
        AssertMsgReturn(pVM->tm.s.pvGIPR3, ("GIP support is now required!\n"), VERR_TM_GIP_REQUIRED);
        AssertMsgReturn((pGip->u32Version >> 16) == (SUPGLOBALINFOPAGE_VERSION >> 16),
                        ("Unsupported GIP version %#x! (expected=%#x)\n", pGip->u32Version, SUPGLOBALINFOPAGE_VERSION),
                        VERR_TM_GIP_VERSION);

        /* Check assumptions made in TMAllVirtual.cpp about the GIP update interval. */
        if (   pGip->u32Magic == SUPGLOBALINFOPAGE_MAGIC
            && pGip->u32UpdateIntervalNS >= 250000000 /* 0.25s */)
            return VMSetError(pVM, VERR_TM_GIP_UPDATE_INTERVAL_TOO_BIG, RT_SRC_POS,
                              N_("The GIP update interval is too big. u32UpdateIntervalNS=%RU32 (u32UpdateHz=%RU32)"),
                              pGip->u32UpdateIntervalNS, pGip->u32UpdateHz);

        /* Log GIP info that may come in handy. */
        LogRel(("TM: GIP - u32Mode=%d (%s) u32UpdateHz=%u u32UpdateIntervalNS=%u enmUseTscDelta=%d (%s) fGetGipCpu=%#x cCpus=%d\n",
                pGip->u32Mode, SUPGetGIPModeName(pGip), pGip->u32UpdateHz, pGip->u32UpdateIntervalNS,
                pGip->enmUseTscDelta, SUPGetGIPTscDeltaModeName(pGip), pGip->fGetGipCpu, pGip->cCpus));
        LogRel(("TM: GIP - u64CpuHz=%'RU64 (%#RX64) SUPGetCpuHzFromGip => %'RU64\n",
                pGip->u64CpuHz, pGip->u64CpuHz, SUPGetCpuHzFromGip(pGip)));
        for (uint32_t iCpuSet = 0; iCpuSet < RT_ELEMENTS(pGip->aiCpuFromCpuSetIdx); iCpuSet++)
        {
            uint16_t iGipCpu = pGip->aiCpuFromCpuSetIdx[iCpuSet];
            if (iGipCpu != UINT16_MAX)
                LogRel(("TM: GIP - CPU: iCpuSet=%#x idCpu=%#x idApic=%#x iGipCpu=%#x i64TSCDelta=%RI64 enmState=%d u64CpuHz=%RU64(*) cErrors=%u\n",
                        iCpuSet, pGip->aCPUs[iGipCpu].idCpu, pGip->aCPUs[iGipCpu].idApic, iGipCpu, pGip->aCPUs[iGipCpu].i64TSCDelta,
                        pGip->aCPUs[iGipCpu].enmState, pGip->aCPUs[iGipCpu].u64CpuHz, pGip->aCPUs[iGipCpu].cErrors));
        }
    }

    /*
     * Setup the VirtualGetRaw backend.
     */
    pVM->tm.s.pfnVirtualGetRawR3                 = tmVirtualNanoTSRediscover;
    pVM->tm.s.VirtualGetRawDataR3.pfnRediscover  = tmVirtualNanoTSRediscover;
    pVM->tm.s.VirtualGetRawDataR3.pfnBad         = tmVirtualNanoTSBad;
    pVM->tm.s.VirtualGetRawDataR3.pfnBadCpuIndex = tmVirtualNanoTSBadCpuIndex;
    pVM->tm.s.VirtualGetRawDataR3.pu64Prev       = &pVM->tm.s.u64VirtualRawPrev;
    if (!SUPR3IsDriverless())
    {
        pVM->tm.s.VirtualGetRawDataR0.pu64Prev = MMHyperR3ToR0(pVM, (void *)&pVM->tm.s.u64VirtualRawPrev);
        AssertRelease(pVM->tm.s.VirtualGetRawDataR0.pu64Prev);
    }
    /* The rest is done in TMR3InitFinalize() since it's too early to call PDM. */

    /*
     * Get our CFGM node, create it if necessary.
     */
    PCFGMNODE pCfgHandle = CFGMR3GetChild(CFGMR3GetRoot(pVM), "TM");
    if (!pCfgHandle)
    {
        rc = CFGMR3InsertNode(CFGMR3GetRoot(pVM), "TM", &pCfgHandle);
        AssertRCReturn(rc, rc);
    }

    /*
     * Specific errors about some obsolete TM settings (remove after 2015-12-03).
     */
    if (CFGMR3Exists(pCfgHandle, "TSCVirtualized"))
        return VMSetError(pVM, VERR_CFGM_CONFIG_UNKNOWN_VALUE, RT_SRC_POS,
                          N_("Configuration error: TM setting \"TSCVirtualized\" is no longer supported. Use the \"TSCMode\" setting instead."));
    if (CFGMR3Exists(pCfgHandle, "UseRealTSC"))
        return VMSetError(pVM, VERR_CFGM_CONFIG_UNKNOWN_VALUE, RT_SRC_POS,
                          N_("Configuration error: TM setting \"UseRealTSC\" is no longer supported. Use the \"TSCMode\" setting instead."));

    if (CFGMR3Exists(pCfgHandle, "MaybeUseOffsettedHostTSC"))
        return VMSetError(pVM, VERR_CFGM_CONFIG_UNKNOWN_VALUE, RT_SRC_POS,
                          N_("Configuration error: TM setting \"MaybeUseOffsettedHostTSC\" is no longer supported. Use the \"TSCMode\" setting instead."));

    /*
     * Validate the rest of the TM settings.
     */
    rc = CFGMR3ValidateConfig(pCfgHandle, "/TM/",
                              "TSCMode|"
                              "TSCModeSwitchAllowed|"
                              "TSCTicksPerSecond|"
                              "TSCTiedToExecution|"
                              "TSCNotTiedToHalt|"
                              "ScheduleSlack|"
                              "CatchUpStopThreshold|"
                              "CatchUpGiveUpThreshold|"
                              "CatchUpStartThreshold0|CatchUpStartThreshold1|CatchUpStartThreshold2|CatchUpStartThreshold3|"
                              "CatchUpStartThreshold4|CatchUpStartThreshold5|CatchUpStartThreshold6|CatchUpStartThreshold7|"
                              "CatchUpStartThreshold8|CatchUpStartThreshold9|"
                              "CatchUpPrecentage0|CatchUpPrecentage1|CatchUpPrecentage2|CatchUpPrecentage3|"
                              "CatchUpPrecentage4|CatchUpPrecentage5|CatchUpPrecentage6|CatchUpPrecentage7|"
                              "CatchUpPrecentage8|CatchUpPrecentage9|"
                              "UTCOffset|"
                              "UTCTouchFileOnJump|"
                              "WarpDrivePercentage|"
                              "HostHzMax|"
                              "HostHzFudgeFactorTimerCpu|"
                              "HostHzFudgeFactorOtherCpu|"
                              "HostHzFudgeFactorCatchUp100|"
                              "HostHzFudgeFactorCatchUp200|"
                              "HostHzFudgeFactorCatchUp400|"
                              "TimerMillies"
                              ,
                              "",
                              "TM", 0);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Determine the TSC configuration and frequency.
     */
    /** @cfgm{/TM/TSCMode, string, Depends on the CPU and VM config}
     * The name of the TSC mode to use: VirtTSCEmulated, RealTSCOffset or Dynamic.
     * The default depends on the VM configuration and the capabilities of the
     * host CPU.  Other config options or runtime changes may override the TSC
     * mode specified here.
     */
    char szTSCMode[32];
    rc = CFGMR3QueryString(pCfgHandle, "TSCMode", szTSCMode, sizeof(szTSCMode));
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        /** @todo Rainy-day/never: Dynamic mode isn't currently suitable for SMP VMs, so
         * fall back on the more expensive emulated mode.  With the current TSC handling
         * (frequent switching between offsetted mode and taking VM exits, on all VCPUs
         * without any kind of coordination), it will lead to inconsistent TSC behavior
         * with guest SMP, including TSC going backwards. */
        pVM->tm.s.enmTSCMode = NEMR3NeedSpecialTscMode(pVM) ? TMTSCMODE_NATIVE_API
                             : pVM->cCpus == 1 && tmR3HasFixedTSC(pVM) ? TMTSCMODE_DYNAMIC : TMTSCMODE_VIRT_TSC_EMULATED;
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query string value \"TSCMode\""));
    else
    {
        if (!RTStrCmp(szTSCMode, "VirtTSCEmulated"))
            pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;
        else if (!RTStrCmp(szTSCMode, "RealTSCOffset"))
            pVM->tm.s.enmTSCMode = TMTSCMODE_REAL_TSC_OFFSET;
        else if (!RTStrCmp(szTSCMode, "Dynamic"))
            pVM->tm.s.enmTSCMode = TMTSCMODE_DYNAMIC;
        else
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Unrecognized TM TSC mode value \"%s\""), szTSCMode);
        if (NEMR3NeedSpecialTscMode(pVM))
        {
            LogRel(("TM: NEM overrides the /TM/TSCMode=%s settings.\n", szTSCMode));
            pVM->tm.s.enmTSCMode = TMTSCMODE_NATIVE_API;
        }
    }

    /** @cfgm{/TM/TSCModeSwitchAllowed, bool, Whether TM TSC mode switch is
     *      allowed at runtime}
     * When using paravirtualized guests, we dynamically switch TSC modes to a
     * more optimal one for performance.  This setting allows overriding this
     * behaviour. */
    rc = CFGMR3QueryBool(pCfgHandle, "TSCModeSwitchAllowed", &pVM->tm.s.fTSCModeSwitchAllowed);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        /* This is finally determined in TMR3InitFinalize() as GIM isn't initialized yet. */
        pVM->tm.s.fTSCModeSwitchAllowed = true;
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query bool value \"TSCModeSwitchAllowed\""));
    if (pVM->tm.s.fTSCModeSwitchAllowed && pVM->tm.s.enmTSCMode == TMTSCMODE_NATIVE_API)
    {
        LogRel(("TM: NEM overrides the /TM/TSCModeSwitchAllowed setting.\n"));
        pVM->tm.s.fTSCModeSwitchAllowed = false;
    }

    /** @cfgm{/TM/TSCTicksPerSecond, uint64_t, Current TSC frequency from GIP}
     * The number of TSC ticks per second (i.e. the TSC frequency).  This will
     * override enmTSCMode.
     */
    pVM->tm.s.cTSCTicksPerSecondHost = tmR3CalibrateTSC();
    rc = CFGMR3QueryU64(pCfgHandle, "TSCTicksPerSecond", &pVM->tm.s.cTSCTicksPerSecond);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
    {
        pVM->tm.s.cTSCTicksPerSecond = pVM->tm.s.cTSCTicksPerSecondHost;
        if (   (   pVM->tm.s.enmTSCMode == TMTSCMODE_DYNAMIC
                || pVM->tm.s.enmTSCMode == TMTSCMODE_VIRT_TSC_EMULATED)
            && pVM->tm.s.cTSCTicksPerSecond >= _4G)
        {
            pVM->tm.s.cTSCTicksPerSecond = _4G - 1; /* (A limitation of our math code) */
            pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;
        }
    }
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint64_t value \"TSCTicksPerSecond\""));
    else if (   pVM->tm.s.cTSCTicksPerSecond < _1M
             || pVM->tm.s.cTSCTicksPerSecond >= _4G)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"TSCTicksPerSecond\" = %RI64 is not in the range 1MHz..4GHz-1"),
                          pVM->tm.s.cTSCTicksPerSecond);
    else if (pVM->tm.s.enmTSCMode != TMTSCMODE_NATIVE_API)
        pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;
    else
    {
        LogRel(("TM: NEM overrides the /TM/TSCTicksPerSecond=%RU64 setting.\n", pVM->tm.s.cTSCTicksPerSecond));
        pVM->tm.s.cTSCTicksPerSecond = pVM->tm.s.cTSCTicksPerSecondHost;
    }

    /** @cfgm{/TM/TSCTiedToExecution, bool, false}
     * Whether the TSC should be tied to execution.  This will exclude most of
     * the virtualization overhead, but will by default include the time spent
     * in the halt state (see TM/TSCNotTiedToHalt).  This setting will override
     * all other TSC settings except for TSCTicksPerSecond and TSCNotTiedToHalt,
     * and should be avoided or used with great care.  Note that this will only
     * work right together with VT-x or AMD-V, and with a single virtual CPU. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCTiedToExecution", &pVM->tm.s.fTSCTiedToExecution, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCTiedToExecution\""));
    if (pVM->tm.s.fTSCTiedToExecution && pVM->tm.s.enmTSCMode == TMTSCMODE_NATIVE_API)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("/TM/TSCTiedToExecution is not supported in NEM mode!"));
    if (pVM->tm.s.fTSCTiedToExecution)
        pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;


    /** @cfgm{/TM/TSCNotTiedToHalt, bool, false}
     * This is used with /TM/TSCTiedToExecution to control how the TSC operates
     * across HLT instructions.  When true, HLT is considered execution time and
     * the TSC continues to run, while when false (the default) the TSC stops
     * during halt. */
    rc = CFGMR3QueryBoolDef(pCfgHandle, "TSCNotTiedToHalt", &pVM->tm.s.fTSCNotTiedToHalt, false);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query bool value \"TSCNotTiedToHalt\""));

    /*
     * Configure the timer synchronous virtual time.
     */
    /** @cfgm{/TM/ScheduleSlack, uint32_t, ns, 0, UINT32_MAX, 100000}
     * Scheduling slack when processing timers. */
    rc = CFGMR3QueryU32(pCfgHandle, "ScheduleSlack", &pVM->tm.s.u32VirtualSyncScheduleSlack);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualSyncScheduleSlack = 100000; /* 0.100ms (ASSUMES virtual time is nanoseconds) */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 32-bit integer value \"ScheduleSlack\""));

    /** @cfgm{/TM/CatchUpStopThreshold, uint64_t, ns, 0, UINT64_MAX, 500000}
     * When to stop a catch-up, considering it successful. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStopThreshold", &pVM->tm.s.u64VirtualSyncCatchUpStopThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpStopThreshold = 500000; /* 0.5ms */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStopThreshold\""));

    /** @cfgm{/TM/CatchUpGiveUpThreshold, uint64_t, ns, 0, UINT64_MAX, 60000000000}
     * When to give up a catch-up attempt. */
    rc = CFGMR3QueryU64(pCfgHandle, "CatchUpGiveUpThreshold", &pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold = UINT64_C(60000000000); /* 60 sec */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"CatchUpGiveUpThreshold\""));


    /** @cfgm{/TM/CatchUpPrecentage[0..9], uint32_t, %, 1, 2000, various}
     * The catch-up percent for a given period. */
    /** @cfgm{/TM/CatchUpStartThreshold[0..9], uint64_t, ns, 0, UINT64_MAX}
     * The catch-up period threshold, or if you like, when a period starts. */
#define TM_CFG_PERIOD(iPeriod, DefStart, DefPct) \
    do \
    { \
        uint64_t u64; \
        rc = CFGMR3QueryU64(pCfgHandle, "CatchUpStartThreshold" #iPeriod, &u64); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            u64 = UINT64_C(DefStart); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 64-bit integer value \"CatchUpStartThreshold" #iPeriod "\"")); \
        if (   (iPeriod > 0 && u64 <= pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod - 1].u64Start) \
            || u64 >= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold) \
            return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS, N_("Configuration error: Invalid start of period #" #iPeriod ": %'RU64"), u64); \
        pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u64Start = u64; \
        rc = CFGMR3QueryU32(pCfgHandle, "CatchUpPrecentage" #iPeriod, &pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage); \
        if (rc == VERR_CFGM_VALUE_NOT_FOUND) \
            pVM->tm.s.aVirtualSyncCatchUpPeriods[iPeriod].u32Percentage = (DefPct); \
        else if (RT_FAILURE(rc)) \
            return VMSetError(pVM, rc, RT_SRC_POS, N_("Configuration error: Failed to query 32-bit integer value \"CatchUpPrecentage" #iPeriod "\"")); \
    } while (0)
    /* This needs more tuning.  Not sure if we really need so many periods or need to be so gentle. */
    TM_CFG_PERIOD(0,     750000,   5); /* 0.75ms at 1.05x */
    TM_CFG_PERIOD(1,    1500000,  10); /* 1.50ms at 1.10x */
    TM_CFG_PERIOD(2,    8000000,  25); /*    8ms at 1.25x */
    TM_CFG_PERIOD(3,   30000000,  50); /*   30ms at 1.50x */
    TM_CFG_PERIOD(4,   75000000,  75); /*   75ms at 1.75x */
    TM_CFG_PERIOD(5,  175000000, 100); /*  175ms at 2x */
    TM_CFG_PERIOD(6,  500000000, 200); /*  500ms at 3x */
    TM_CFG_PERIOD(7, 3000000000, 300); /*    3s  at 4x */
    TM_CFG_PERIOD(8,30000000000, 400); /*   30s  at 5x */
    TM_CFG_PERIOD(9,55000000000, 500); /*   55s  at 6x */
    AssertCompile(RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods) == 10);
#undef TM_CFG_PERIOD
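
    /* Illustrative sketch (names invented; this is not how TM actually applies
       the table): given the current virtual sync lag, the catch-up percentage
       is found by walking the periods above, and the effective clock speed-up
       is (100 + u32Percentage) / 100.  E.g. a 200ms lag falls in period 5
       (start 175ms), giving 100%, i.e. a 2.0x clock until the lag shrinks. */
#if 0 /* sketch only, assumes cNsLag >= aVirtualSyncCatchUpPeriods[0].u64Start */
    uint32_t tmSketchCatchUpPct(PVM pVM, uint64_t cNsLag)
    {
        uint32_t uPct = pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u32Percentage;
        for (unsigned i = 1; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
            if (cNsLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start)
                uPct = pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage;
            else
                break;
        return uPct;
    }
#endif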

    /*
     * Configure real world time (UTC).
     */
    /** @cfgm{/TM/UTCOffset, int64_t, ns, INT64_MIN, INT64_MAX, 0}
     * The UTC offset.  This is used to put the guest back or forwards in time. */
    rc = CFGMR3QueryS64(pCfgHandle, "UTCOffset", &pVM->tm.s.offUTC);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.offUTC = 0; /* ns */
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query 64-bit integer value \"UTCOffset\""));

    /** @cfgm{/TM/UTCTouchFileOnJump, string, none}
     * File to be written to every time the host time jumps. */
    rc = CFGMR3QueryStringAlloc(pCfgHandle, "UTCTouchFileOnJump", &pVM->tm.s.pszUtcTouchFileOnJump);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.pszUtcTouchFileOnJump = NULL;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query string value \"UTCTouchFileOnJump\""));

    /*
     * Setup the warp drive.
     */
    /** @cfgm{/TM/WarpDrivePercentage, uint32_t, %, 0, 20000, 100}
     * The warp drive percentage; 100% is normal speed.  This is used to speed
     * up or slow down the virtual clock, which can be useful for
     * fast-forwarding boring periods during tests. */
    rc = CFGMR3QueryU32(pCfgHandle, "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        rc = CFGMR3QueryU32(CFGMR3GetRoot(pVM), "WarpDrivePercentage", &pVM->tm.s.u32VirtualWarpDrivePercentage); /* legacy */
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        pVM->tm.s.u32VirtualWarpDrivePercentage = 100;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"WarpDrivePercentage\""));
    else if (   pVM->tm.s.u32VirtualWarpDrivePercentage < 2
             || pVM->tm.s.u32VirtualWarpDrivePercentage > 20000)
        return VMSetError(pVM, VERR_INVALID_PARAMETER, RT_SRC_POS,
                          N_("Configuration error: \"WarpDrivePercentage\" = %RI32 is not in the range 2..20000"),
                          pVM->tm.s.u32VirtualWarpDrivePercentage);
    pVM->tm.s.fVirtualWarpDrive = pVM->tm.s.u32VirtualWarpDrivePercentage != 100;
    if (pVM->tm.s.fVirtualWarpDrive)
    {
        if (pVM->tm.s.enmTSCMode == TMTSCMODE_NATIVE_API)
            LogRel(("TM: Warp-drive active, except for TSC which is in NEM mode. u32VirtualWarpDrivePercentage=%RI32\n",
                    pVM->tm.s.u32VirtualWarpDrivePercentage));
        else
        {
            pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;
            LogRel(("TM: Warp-drive active. u32VirtualWarpDrivePercentage=%RI32\n", pVM->tm.s.u32VirtualWarpDrivePercentage));
        }
    }

    /*
     * Gather the Host Hz configuration values.
     */
    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzMax", &pVM->tm.s.cHostHzMax, 20000);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzMax\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorTimerCpu", &pVM->tm.s.cPctHostHzFudgeFactorTimerCpu, 111);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorTimerCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorOtherCpu", &pVM->tm.s.cPctHostHzFudgeFactorOtherCpu, 110);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorOtherCpu\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp100", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp100, 300);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp100\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp200", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp200, 250);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp200\""));

    rc = CFGMR3QueryU32Def(pCfgHandle, "HostHzFudgeFactorCatchUp400", &pVM->tm.s.cPctHostHzFudgeFactorCatchUp400, 200);
    if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"HostHzFudgeFactorCatchUp400\""));
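
    /* Note: the fudge factors above are plain percentages (the default 111
       means 1.11x).  As a generic illustration (not the actual call site,
       which lives elsewhere in TM), scaling a 1000 Hz timer frequency hint
       by such a factor gives 1000 * 111 / 100 = 1110 Hz. */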

    /*
     * Finally, setup and report.
     */
    pVM->tm.s.enmOriginalTSCMode = pVM->tm.s.enmTSCMode;
    CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~X86_CR4_TSD);
    LogRel(("TM: cTSCTicksPerSecond=%'RU64 (%#RX64) enmTSCMode=%d (%s)\n"
            "TM: cTSCTicksPerSecondHost=%'RU64 (%#RX64)\n"
            "TM: TSCTiedToExecution=%RTbool TSCNotTiedToHalt=%RTbool\n",
            pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.enmTSCMode, tmR3GetTSCModeName(pVM),
            pVM->tm.s.cTSCTicksPerSecondHost, pVM->tm.s.cTSCTicksPerSecondHost,
            pVM->tm.s.fTSCTiedToExecution, pVM->tm.s.fTSCNotTiedToHalt));

    /*
     * Start the timer (guard against REM not yielding).
     */
    /** @cfgm{/TM/TimerMillies, uint32_t, ms, 1, 1000, 10}
     * The watchdog timer interval. */
    uint32_t u32Millies;
    rc = CFGMR3QueryU32(pCfgHandle, "TimerMillies", &u32Millies);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        u32Millies = VM_IS_HM_ENABLED(pVM) ? 1000 : 10;
    else if (RT_FAILURE(rc))
        return VMSetError(pVM, rc, RT_SRC_POS,
                          N_("Configuration error: Failed to query uint32_t value \"TimerMillies\""));
    rc = RTTimerCreate(&pVM->tm.s.pTimer, u32Millies, tmR3TimerCallback, pVM);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to create timer, u32Millies=%d rc=%Rrc.\n", u32Millies, rc));
        return rc;
    }
    Log(("TM: Created timer %p firing every %d milliseconds\n", pVM->tm.s.pTimer, u32Millies));
    pVM->tm.s.u32TimerMillies = u32Millies;

    /*
     * Register saved state.
     */
    rc = SSMR3RegisterInternal(pVM, "tm", 1, TM_SAVED_STATE_VERSION, sizeof(uint64_t) * 8,
                               NULL, NULL, NULL,
                               NULL, tmR3Save, NULL,
                               NULL, tmR3Load, NULL);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Register statistics.
     */
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.c1nsSteps,STAMTYPE_U32, "/TM/R3/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR3.cBadPrev, STAMTYPE_U32, "/TM/R3/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.c1nsSteps,STAMTYPE_U32, "/TM/R0/1nsSteps", STAMUNIT_OCCURENCES, "Virtual time 1ns steps (due to TSC / GIP variations).");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.VirtualGetRawDataR0.cBadPrev, STAMTYPE_U32, "/TM/R0/cBadPrev", STAMUNIT_OCCURENCES, "Times the previous virtual time was considered erratic (shouldn't ever happen).");
    STAM_REL_REG( pVM,(void*)&pVM->tm.s.offVirtualSync, STAMTYPE_U64, "/TM/VirtualSync/CurrentOffset", STAMUNIT_NS, "The current offset. (subtract GivenUp to get the lag)");
    STAM_REL_REG_USED(pVM,(void*)&pVM->tm.s.offVirtualSyncGivenUp, STAMTYPE_U64, "/TM/VirtualSync/GivenUp", STAMUNIT_NS, "Nanoseconds of the 'CurrentOffset' that's been given up and won't ever be attempted caught up with.");
    STAM_REL_REG( pVM,(void*)&pVM->tm.s.HzHint.s.uMax, STAMTYPE_U32, "/TM/MaxHzHint", STAMUNIT_HZ, "Max guest timer frequency hint.");
    for (uint32_t i = 0; i < RT_ELEMENTS(pVM->tm.s.aTimerQueues); i++)
    {
        rc = STAMR3RegisterF(pVM, (void *)&pVM->tm.s.aTimerQueues[i].uMaxHzHint, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_HZ,
                             "", "/TM/MaxHzHint/%s", pVM->tm.s.aTimerQueues[i].szName);
        AssertRC(rc);
    }

#ifdef VBOX_WITH_STATISTICS
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cExpired, STAMTYPE_U32, "/TM/R3/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR3.cUpdateRaces,STAMTYPE_U32, "/TM/R3/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cExpired, STAMTYPE_U32, "/TM/R0/cExpired", STAMUNIT_OCCURENCES, "Times the TSC interval expired (overlaps 1ns steps).");
    STAM_REG_USED(pVM,(void *)&pVM->tm.s.VirtualGetRawDataR0.cUpdateRaces,STAMTYPE_U32, "/TM/R0/cUpdateRaces", STAMUNIT_OCCURENCES, "Thread races when updating the previous timestamp.");
    STAM_REG(pVM, &pVM->tm.s.StatDoQueues, STAMTYPE_PROFILE, "/TM/DoQueues", STAMUNIT_TICKS_PER_CALL, "Profiling timer TMR3TimerQueuesDo.");
    STAM_REG(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].StatDo, STAMTYPE_PROFILE, "/TM/DoQueues/Virtual", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC].StatDo,STAMTYPE_PROFILE,"/TM/DoQueues/VirtualSync", STAMUNIT_TICKS_PER_CALL, "Time spent on the virtual sync clock queue.");
    STAM_REG(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_REAL].StatDo, STAMTYPE_PROFILE, "/TM/DoQueues/Real", STAMUNIT_TICKS_PER_CALL, "Time spent on the real clock queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPoll, STAMTYPE_COUNTER, "/TM/Poll", STAMUNIT_OCCURENCES, "TMTimerPoll calls.");
    STAM_REG(pVM, &pVM->tm.s.StatPollAlreadySet, STAMTYPE_COUNTER, "/TM/Poll/AlreadySet", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the FF was already set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollELoop, STAMTYPE_COUNTER, "/TM/Poll/ELoop", STAMUNIT_OCCURENCES, "Times TMTimerPoll has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatPollMiss, STAMTYPE_COUNTER, "/TM/Poll/Miss", STAMUNIT_OCCURENCES, "TMTimerPoll calls where nothing had expired.");
    STAM_REG(pVM, &pVM->tm.s.StatPollRunning, STAMTYPE_COUNTER, "/TM/Poll/Running", STAMUNIT_OCCURENCES, "TMTimerPoll calls where the queues were being run.");
    STAM_REG(pVM, &pVM->tm.s.StatPollSimple, STAMTYPE_COUNTER, "/TM/Poll/Simple", STAMUNIT_OCCURENCES, "TMTimerPoll calls where we could take the simple path.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtual, STAMTYPE_COUNTER, "/TM/Poll/HitsVirtual", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL queue.");
    STAM_REG(pVM, &pVM->tm.s.StatPollVirtualSync, STAMTYPE_COUNTER, "/TM/Poll/HitsVirtualSync", STAMUNIT_OCCURENCES, "The number of times TMTimerPoll found an expired TMCLOCK_VIRTUAL_SYNC queue.");

    STAM_REG(pVM, &pVM->tm.s.StatPostponedR3, STAMTYPE_COUNTER, "/TM/PostponedR3", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatPostponedRZ, STAMTYPE_COUNTER, "/TM/PostponedRZ", STAMUNIT_OCCURENCES, "Postponed due to unschedulable state, in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneR3, STAMTYPE_PROFILE, "/TM/ScheduleOneR3", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleOneRZ, STAMTYPE_PROFILE, "/TM/ScheduleOneRZ", STAMUNIT_TICKS_PER_CALL, "Profiling the scheduling of one queue during a TMTimer* call in EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatScheduleSetFF, STAMTYPE_COUNTER, "/TM/ScheduleSetFF", STAMUNIT_OCCURENCES, "The number of times the timer FF was set instead of doing scheduling.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSet, STAMTYPE_COUNTER, "/TM/TimerSet", STAMUNIT_OCCURENCES, "Calls, except virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetOpt, STAMTYPE_COUNTER, "/TM/TimerSet/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetR3, STAMTYPE_PROFILE, "/TM/TimerSet/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRZ, STAMTYPE_PROFILE, "/TM/TimerSet/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStActive, STAMTYPE_COUNTER, "/TM/TimerSet/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSet/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStOther, STAMTYPE_COUNTER, "/TM/TimerSet/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStop, STAMTYPE_COUNTER, "/TM/TimerSet/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendStopSched", STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendSched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStPendResched, STAMTYPE_COUNTER, "/TM/TimerSet/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetStStopped, STAMTYPE_COUNTER, "/TM/TimerSet/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVs, STAMTYPE_COUNTER, "/TM/TimerSetVs", STAMUNIT_OCCURENCES, "TMTimerSet calls on virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsR3, STAMTYPE_PROFILE, "/TM/TimerSetVs/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-3 on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsRZ, STAMTYPE_PROFILE, "/TM/TimerSetVs/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSet calls made in ring-0 / RC on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStActive, STAMTYPE_COUNTER, "/TM/TimerSetVs/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSetVs/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetVsStStopped, STAMTYPE_COUNTER, "/TM/TimerSetVs/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelative, STAMTYPE_COUNTER, "/TM/TimerSetRelative", STAMUNIT_OCCURENCES, "Calls, except virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeOpt, STAMTYPE_COUNTER, "/TM/TimerSetRelative/Opt", STAMUNIT_OCCURENCES, "Optimized path taken.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeR3, STAMTYPE_PROFILE, "/TM/TimerSetRelative/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3 (sans virtual sync).");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeRZ, STAMTYPE_PROFILE, "/TM/TimerSetRelative/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC (sans virtual sync).");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStActive, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStOther, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StOther", STAMUNIT_OCCURENCES, "Other states");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStop, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStop", STAMUNIT_OCCURENCES, "PENDING_STOP");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendStopSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendStopSched",STAMUNIT_OCCURENCES, "PENDING_STOP_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendSched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendSched", STAMUNIT_OCCURENCES, "PENDING_SCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStPendResched, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StPendResched", STAMUNIT_OCCURENCES, "PENDING_RESCHEDULE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeStStopped, STAMTYPE_COUNTER, "/TM/TimerSetRelative/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVs, STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs", STAMUNIT_OCCURENCES, "TMTimerSetRelative calls on virtual sync timers");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsR3, STAMTYPE_PROFILE, "/TM/TimerSetRelativeVs/R3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-3 on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsRZ, STAMTYPE_PROFILE, "/TM/TimerSetRelativeVs/RZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerSetRelative calls made in ring-0 / RC on virtual sync timers.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStActive, STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StActive", STAMUNIT_OCCURENCES, "ACTIVE");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStExpDeliver, STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StExpDeliver", STAMUNIT_OCCURENCES, "EXPIRED_DELIVER");
    STAM_REG(pVM, &pVM->tm.s.StatTimerSetRelativeVsStStopped, STAMTYPE_COUNTER, "/TM/TimerSetRelativeVs/StStopped", STAMUNIT_OCCURENCES, "STOPPED");

    STAM_REG(pVM, &pVM->tm.s.StatTimerStopR3, STAMTYPE_PROFILE, "/TM/TimerStopR3", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-3.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerStopRZ, STAMTYPE_PROFILE, "/TM/TimerStopRZ", STAMUNIT_TICKS_PER_CALL, "Profiling TMTimerStop calls made in ring-0 / RC.");

    STAM_REG(pVM, &pVM->tm.s.StatVirtualGet, STAMTYPE_COUNTER, "/TM/VirtualGet", STAMUNIT_OCCURENCES, "The number of times TMTimerGet was called when the clock was running.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualGetSetFF, STAMTYPE_COUNTER, "/TM/VirtualGetSetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling TMTimerGet.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGet, STAMTYPE_COUNTER, "/TM/VirtualSyncGet", STAMUNIT_OCCURENCES, "The number of times tmVirtualSyncGetEx was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetAdjLast, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/AdjLast", STAMUNIT_OCCURENCES, "Times we've adjusted against the last returned time stamp.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetELoop, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/ELoop", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx has given up getting a consistent virtual sync data set.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetExpired, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Expired", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx encountered an expired timer stopping the clock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLocked, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Locked", STAMUNIT_OCCURENCES, "Times we successfully acquired the lock in tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetLockless, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/Lockless", STAMUNIT_OCCURENCES, "Times tmVirtualSyncGetEx returned without needing to take the lock.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGetSetFF, STAMTYPE_COUNTER, "/TM/VirtualSyncGet/SetFF", STAMUNIT_OCCURENCES, "Times we set the FF when calling tmVirtualSyncGetEx.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause, STAMTYPE_COUNTER, "/TM/VirtualPause", STAMUNIT_OCCURENCES, "The number of times TMR3TimerPause was called.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume, STAMTYPE_COUNTER, "/TM/VirtualResume", STAMUNIT_OCCURENCES, "The number of times TMR3TimerResume was called.");

    STAM_REG(pVM, &pVM->tm.s.StatTimerCallbackSetFF, STAMTYPE_COUNTER, "/TM/CallbackSetFF", STAMUNIT_OCCURENCES, "The number of times the timer callback set FF.");
    STAM_REG(pVM, &pVM->tm.s.StatTimerCallback, STAMTYPE_COUNTER, "/TM/Callback", STAMUNIT_OCCURENCES, "The number of times the timer callback is invoked.");

    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE010, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE010", STAMUNIT_OCCURENCES, "In catch-up mode, 10% or lower.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE025, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE025", STAMUNIT_OCCURENCES, "In catch-up mode, 25%-11%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupLE100, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupLE100", STAMUNIT_OCCURENCES, "In catch-up mode, 100%-26%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCCatchupOther, STAMTYPE_COUNTER, "/TM/TSC/Intercept/CatchupOther", STAMUNIT_OCCURENCES, "In catch-up mode, > 100%.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotFixed, STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotFixed", STAMUNIT_OCCURENCES, "TSC is not fixed, it may run at variable speed.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCNotTicking, STAMTYPE_COUNTER, "/TM/TSC/Intercept/NotTicking", STAMUNIT_OCCURENCES, "TSC is not ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSyncNotTicking, STAMTYPE_COUNTER, "/TM/TSC/Intercept/SyncNotTicking", STAMUNIT_OCCURENCES, "VirtualSync isn't ticking.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCWarp, STAMTYPE_COUNTER, "/TM/TSC/Intercept/Warp", STAMUNIT_OCCURENCES, "Warpdrive is active.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCSet, STAMTYPE_COUNTER, "/TM/TSC/Sets", STAMUNIT_OCCURENCES, "Calls to TMCpuTickSet.");
    STAM_REG(pVM, &pVM->tm.s.StatTSCUnderflow, STAMTYPE_COUNTER, "/TM/TSC/Underflow", STAMUNIT_OCCURENCES, "TSC underflow; corrected with last seen value.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualPause, STAMTYPE_COUNTER, "/TM/TSC/Pause", STAMUNIT_OCCURENCES, "The number of times the TSC was paused.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualResume, STAMTYPE_COUNTER, "/TM/TSC/Resume", STAMUNIT_OCCURENCES, "The number of times the TSC was resumed.");
#endif /* VBOX_WITH_STATISTICS */

    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU pVCpu = pVM->apCpusR3[i];
        STAMR3RegisterF(pVM, &pVCpu->tm.s.offTSCRawSrc, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS, "TSC offset relative to the raw source", "/TM/TSC/offCPU%u", i);
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
# if defined(VBOX_WITH_STATISTICS) || defined(VBOX_WITH_NS_ACCOUNTING_STATS)
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsTotal, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Resettable: Total CPU run time.", "/TM/CPU/%02u", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsExecuting, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code.", "/TM/CPU/%02u/PrfExecuting", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsExecLong, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - long hauls.", "/TM/CPU/%02u/PrfExecLong", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsExecShort, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - short stretches.", "/TM/CPU/%02u/PrfExecShort", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsExecTiny, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent executing guest code - tiny bits.", "/TM/CPU/%02u/PrfExecTiny", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsHalted, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent halted.", "/TM/CPU/%02u/PrfHalted", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.StatNsOther, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_NS_PER_OCCURENCE, "Resettable: Time spent in the VMM or preempted.", "/TM/CPU/%02u/PrfOther", i);
# endif
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cNsTotalStat, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Total CPU run time.", "/TM/CPU/%02u/cNsTotal", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cNsExecuting, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent executing guest code.", "/TM/CPU/%02u/cNsExecuting", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cNsHalted, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent halted.", "/TM/CPU/%02u/cNsHalted", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cNsOtherStat, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Time spent in the VMM or preempted.", "/TM/CPU/%02u/cNsOther", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cPeriodsExecuting, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times executed guest code.", "/TM/CPU/%02u/cPeriodsExecuting", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.cPeriodsHalted, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_COUNT, "Times halted.", "/TM/CPU/%02u/cPeriodsHalted", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/%02u/pctExecuting", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.CpuLoad.cPctHalted, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/%02u/pctHalted", i);
        STAMR3RegisterF(pVM, &pVCpu->tm.s.CpuLoad.cPctOther, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/%02u/pctOther", i);
#endif
    }
#ifndef VBOX_WITHOUT_NS_ACCOUNTING
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctExecuting, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent executing guest code recently.", "/TM/CPU/pctExecuting");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctHalted, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent halted recently.", "/TM/CPU/pctHalted");
    STAMR3RegisterF(pVM, &pVM->tm.s.CpuLoad.cPctOther, STAMTYPE_U8, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "Time spent in the VMM or preempted recently.", "/TM/CPU/pctOther");
#endif

#ifdef VBOX_WITH_STATISTICS
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncCatchup, STAMTYPE_PROFILE_ADV, "/TM/VirtualSync/CatchUp", STAMUNIT_TICKS_PER_OCCURENCE, "Counting and measuring the times spent catching up.");
    STAM_REG(pVM, (void *)&pVM->tm.s.fVirtualSyncCatchUp, STAMTYPE_U8, "/TM/VirtualSync/CatchUpActive", STAMUNIT_NONE, "Catch-Up active indicator.");
    STAM_REG(pVM, (void *)&pVM->tm.s.u32VirtualSyncCatchUpPercentage, STAMTYPE_U32, "/TM/VirtualSync/CatchUpPercentage", STAMUNIT_PCT, "The catch-up percentage. (+100/100 to get clock multiplier)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncFF, STAMTYPE_PROFILE, "/TM/VirtualSync/FF", STAMUNIT_TICKS_PER_OCCURENCE, "Time spent in TMR3VirtualSyncFF by all but the dedicated timer EMT.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUp, STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUp", STAMUNIT_OCCURENCES, "Times the catch-up was abandoned.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting, STAMTYPE_COUNTER, "/TM/VirtualSync/GiveUpBeforeStarting",STAMUNIT_OCCURENCES, "Times the catch-up was abandoned before even starting. (Typically debugging++.)");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRun, STAMTYPE_COUNTER, "/TM/VirtualSync/Run", STAMUNIT_OCCURENCES, "Times the virtual sync timer queue was considered.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunRestart, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Restarts", STAMUNIT_OCCURENCES, "Times the clock was restarted after a run.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStop, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/Stop", STAMUNIT_OCCURENCES, "Times the clock was stopped when calculating the current time before examining the timers.");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunStoppedAlready, STAMTYPE_COUNTER, "/TM/VirtualSync/Run/StoppedAlready", STAMUNIT_OCCURENCES, "Times the clock was already stopped elsewhere (TMVirtualSyncGet).");
    STAM_REG(pVM, &pVM->tm.s.StatVirtualSyncRunSlack, STAMTYPE_PROFILE, "/TM/VirtualSync/Run/Slack", STAMUNIT_NS_PER_OCCURENCE, "The scheduling slack. (Catch-up handed out when running timers.)");
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods); i++)
    {
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage, STAMTYPE_U32, STAMVISIBILITY_ALWAYS, STAMUNIT_PCT, "The catch-up percentage.", "/TM/VirtualSync/Periods/%u", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupAdjust[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times adjusted to this period.", "/TM/VirtualSync/Periods/%u/Adjust", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aStatVirtualSyncCatchupInitial[i], STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, "Times started in this period.", "/TM/VirtualSync/Periods/%u/Initial", i);
        STAMR3RegisterF(pVM, &pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u64Start, STAMTYPE_U64, STAMVISIBILITY_ALWAYS, STAMUNIT_NS, "Start of this period (lag).", "/TM/VirtualSync/Periods/%u/Start", i);
    }
#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register info handlers.
     */
    DBGFR3InfoRegisterInternalEx(pVM, "timers", "Dumps all timers. No arguments.", tmR3TimerInfo, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "activetimers", "Dumps all active timers. No arguments.", tmR3TimerInfoActive, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalEx(pVM, "clocks", "Display the time of the various clocks.", tmR3InfoClocks, DBGFINFO_FLAGS_RUN_ON_EMT);
    DBGFR3InfoRegisterInternalArgv(pVM, "cpuload", "Display the CPU load stats (--help for details).", tmR3InfoCpuLoad, 0);

    return VINF_SUCCESS;
}


/**
 * Checks if the host CPU has a fixed TSC frequency.
 *
 * @returns true if it has, false if it hasn't.
 * @param   pVM     The cross context VM structure.
 *
 * @remarks This test doesn't bother with very old CPUs that don't do power
 *          management or any other stuff that might influence the TSC rate.
 *          This isn't currently relevant.
 */
static bool tmR3HasFixedTSC(PVM pVM)
{
    /*
     * ASSUME that if the GIP is in invariant TSC mode, it's because the CPU
     * actually has an invariant TSC.
     *
     * In driverless mode we just assume a sync TSC for now, regardless of
     * what the case actually is.
     */
    PSUPGLOBALINFOPAGE const pGip       = g_pSUPGlobalInfoPage;
    SUPGIPMODE const         enmGipMode = pGip ? (SUPGIPMODE)pGip->u32Mode : SUPGIPMODE_INVARIANT_TSC;
    if (enmGipMode == SUPGIPMODE_INVARIANT_TSC)
        return true;

    /*
     * Go by features and model info from the CPUID instruction.
     */
    if (ASMHasCpuId())
    {
        uint32_t uEAX, uEBX, uECX, uEDX;
884
885 /*
886 * By feature. (Used to be AMD specific, but Intel seems to have picked it up.)
887 */
888 ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
889 if (uEAX >= 0x80000007 && RTX86IsValidExtRange(uEAX))
890 {
891 ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
892 if ( (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
893 && enmGipMode != SUPGIPMODE_ASYNC_TSC) /* No fixed tsc if the gip timer is in async mode. */
894 return true;
895 }
896
897 /*
898 * By model.
899 */
900 if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_AMD)
901 {
902 /*
903 * AuthenticAMD - Check for APM support and that TscInvariant is set.
904 *
905 * This test isn't correct with respect to fixed/non-fixed TSC and
906 * older models, but this isn't relevant since the result is currently
907 * only used for making a decision on AMD-V models.
908 */
909#if 0 /* Promoted to generic */
910 ASMCpuId(0x80000000, &uEAX, &uEBX, &uECX, &uEDX);
911 if (uEAX >= 0x80000007)
912 {
913 ASMCpuId(0x80000007, &uEAX, &uEBX, &uECX, &uEDX);
914 if ( (uEDX & X86_CPUID_AMD_ADVPOWER_EDX_TSCINVAR) /* TscInvariant */
915 && ( enmGipMode == SUPGIPMODE_SYNC_TSC /* No fixed tsc if the gip timer is in async mode. */
916 || enmGipMode == SUPGIPMODE_INVARIANT_TSC))
917 return true;
918 }
919#endif
920 }
921 else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_INTEL)
922 {
923 /*
924 * GenuineIntel - Check the model number.
925 *
926 * This test is lacking in the same way and for the same reasons
927 * as the AMD test above.
928 */
929 /** @todo use RTX86GetCpuFamily() and RTX86GetCpuModel() here. */
930 ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
931 unsigned uModel = (uEAX >> 4) & 0x0f;
932 unsigned uFamily = (uEAX >> 8) & 0x0f;
933 if (uFamily == 0x0f)
934 uFamily += (uEAX >> 20) & 0xff;
935 if (uFamily >= 0x06)
936 uModel += ((uEAX >> 16) & 0x0f) << 4;
937 if ( (uFamily == 0x0f /*P4*/ && uModel >= 0x03)
938 || (uFamily == 0x06 /*P2/P3*/ && uModel >= 0x0e))
939 return true;
940 }
941 else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_VIA)
942 {
943 /*
944 * CentaurHauls - Check the model, family and stepping.
945 *
946 * This only checks for VIA CPU models Nano X2, Nano X3,
947 * Eden X2 and QuadCore.
948 */
949 /** @todo use RTX86GetCpuFamily() and RTX86GetCpuModel() here. */
950 ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
951 unsigned uStepping = (uEAX & 0x0f);
952 unsigned uModel = (uEAX >> 4) & 0x0f;
953 unsigned uFamily = (uEAX >> 8) & 0x0f;
954 if ( uFamily == 0x06
955 && uModel == 0x0f
956 && uStepping >= 0x0c
957 && uStepping <= 0x0f)
958 return true;
959 }
960 else if (CPUMGetHostCpuVendor(pVM) == CPUMCPUVENDOR_SHANGHAI)
961 {
962 /*
963 * Shanghai - Check the model, family and stepping.
964 */
965 /** @todo use RTX86GetCpuFamily() and RTX86GetCpuModel() here. */
966 ASMCpuId(1, &uEAX, &uEBX, &uECX, &uEDX);
967 unsigned uFamily = (uEAX >> 8) & 0x0f;
968 if ( uFamily == 0x06
969 || uFamily == 0x07)
970 {
971 return true;
972 }
973 }
974 }
975 return false;
976}
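
/*
 * Illustrative sketch (editorial example, not part of the original source):
 * how the Intel family/model decoding in tmR3HasFixedTSC above plays out for
 * a hypothetical leaf 1 EAX value of 0x000906ea. The base family is
 * (uEAX >> 8) & 0xf = 0x6, so the extended family byte is not added; since
 * the family is >= 0x06 the extended model bits (uEAX >> 16) & 0xf = 0x9 are
 * shifted in above the base model 0xe, giving uModel = 0x9e. Family 0x06
 * with model >= 0x0e matches the P2/P3 test, so fixed TSC is assumed.
 */
#if 0 /* Example only. */
static bool tmR3ExampleIntelModelCheck(uint32_t uEAX)
{
    unsigned uModel  = (uEAX >> 4) & 0x0f;
    unsigned uFamily = (uEAX >> 8) & 0x0f;
    if (uFamily == 0x0f)
        uFamily += (uEAX >> 20) & 0xff;
    if (uFamily >= 0x06)
        uModel += ((uEAX >> 16) & 0x0f) << 4;
    return (uFamily == 0x0f /*P4*/    && uModel >= 0x03)
        || (uFamily == 0x06 /*P2/P3*/ && uModel >= 0x0e);
}
#endif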
977
978
979/**
980 * Calibrate the CPU tick.
981 *
982 * @returns Number of ticks per second.
983 */
984static uint64_t tmR3CalibrateTSC(void)
985{
986 uint64_t u64Hz;
987
988 /*
989 * Use GIP when available. Prefer the nominal one; no need to wait for it.
990 */
991 PSUPGLOBALINFOPAGE pGip = g_pSUPGlobalInfoPage;
992 if (pGip)
993 {
994 u64Hz = pGip->u64CpuHz;
995 if (u64Hz < _1T && u64Hz > _1M)
996 return u64Hz;
997 AssertFailed(); /* This shouldn't happen. */
998
999 u64Hz = SUPGetCpuHzFromGip(pGip);
1000 if (u64Hz < _1T && u64Hz > _1M)
1001 return u64Hz;
1002
1003 AssertFailed(); /* This shouldn't happen. */
1004 }
1005 else
1006 Assert(SUPR3IsDriverless());
1007
1008 /* Call this once first to make sure it's initialized. */
1009 RTTimeNanoTS();
1010
1011 /*
1012 * Yield the CPU to increase our chances of getting a correct value.
1013 */
1014 RTThreadYield(); /* Try to avoid interruptions between the TSC and NanoTS samplings. */
1015 static const unsigned s_auSleep[5] = { 50, 30, 30, 40, 40 };
1016 uint64_t au64Samples[5];
1017 unsigned i;
1018 for (i = 0; i < RT_ELEMENTS(au64Samples); i++)
1019 {
1020 RTMSINTERVAL cMillies;
1021 int cTries = 5;
1022 uint64_t u64Start = ASMReadTSC();
1023 uint64_t u64End;
1024 uint64_t StartTS = RTTimeNanoTS();
1025 uint64_t EndTS;
1026 do
1027 {
1028 RTThreadSleep(s_auSleep[i]);
1029 u64End = ASMReadTSC();
1030 EndTS = RTTimeNanoTS();
1031 cMillies = (RTMSINTERVAL)((EndTS - StartTS + 500000) / 1000000);
1032 } while ( cMillies == 0 /* the sleep may be interrupted... */
1033 || (cMillies < 20 && --cTries > 0));
1034 uint64_t u64Diff = u64End - u64Start;
1035
1036 au64Samples[i] = (u64Diff * 1000) / cMillies;
1037 AssertMsg(cTries > 0, ("cMillies=%d i=%d\n", cMillies, i));
1038 }
1039
1040 /*
1041 * Discard the highest and lowest results and calculate the average.
1042 */
1043 unsigned iHigh = 0;
1044 unsigned iLow = 0;
1045 for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
1046 {
1047 if (au64Samples[i] < au64Samples[iLow])
1048 iLow = i;
1049 if (au64Samples[i] > au64Samples[iHigh])
1050 iHigh = i;
1051 }
1052 au64Samples[iLow] = 0;
1053 au64Samples[iHigh] = 0;
1054
1055 u64Hz = au64Samples[0];
1056 for (i = 1; i < RT_ELEMENTS(au64Samples); i++)
1057 u64Hz += au64Samples[i];
1058 u64Hz /= RT_ELEMENTS(au64Samples) - 2;
1059
1060 return u64Hz;
1061}
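
/*
 * Worked example (editorial, not part of the original source) of the sample
 * arithmetic in tmR3CalibrateTSC: each sample converts a TSC delta measured
 * over cMillies milliseconds into ticks per second as
 * (u64Diff * 1000) / cMillies. For a hypothetical u64Diff of 150 000 000
 * ticks over cMillies = 50, that is (150000000 * 1000) / 50
 * = 3 000 000 000 Hz, i.e. a 3.0 GHz TSC. After zeroing the highest and
 * lowest of the five samples, the remaining three are averaged by dividing
 * their sum by RT_ELEMENTS(au64Samples) - 2 = 3.
 */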
1062
1063
1064/**
1065 * Finalizes the TM initialization.
1066 *
1067 * @returns VBox status code.
1068 * @param pVM The cross context VM structure.
1069 */
1070VMM_INT_DECL(int) TMR3InitFinalize(PVM pVM)
1071{
1072 int rc;
1073
1074 /*
1075 * Resolve symbols, unless we're in driverless mode.
1076 */
1077 if (!SUPR3IsDriverless())
1078 {
1079 rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSBad", &pVM->tm.s.VirtualGetRawDataR0.pfnBad);
1080 AssertRCReturn(rc, rc);
1081 rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSBadCpuIndex", &pVM->tm.s.VirtualGetRawDataR0.pfnBadCpuIndex);
1082 AssertRCReturn(rc, rc);
1083 rc = PDMR3LdrGetSymbolR0(pVM, NULL, "tmVirtualNanoTSRediscover", &pVM->tm.s.VirtualGetRawDataR0.pfnRediscover);
1084 AssertRCReturn(rc, rc);
1085 pVM->tm.s.pfnVirtualGetRawR0 = pVM->tm.s.VirtualGetRawDataR0.pfnRediscover;
1086 }
1087
1088#ifndef VBOX_WITHOUT_NS_ACCOUNTING
1089 /*
1090 * Create a timer for refreshing the CPU load stats.
1091 */
1092 TMTIMERHANDLE hTimer;
1093 rc = TMR3TimerCreate(pVM, TMCLOCK_REAL, tmR3CpuLoadTimer, NULL, TMTIMER_FLAGS_NO_RING0, "CPU Load Timer", &hTimer);
1094 if (RT_SUCCESS(rc))
1095 rc = TMTimerSetMillies(pVM, hTimer, 1000);
1096#endif
1097
1098 /*
1099 * GIM is now initialized. Determine if TSC mode switching is allowed (respecting CFGM override).
1100 */
1101 pVM->tm.s.fTSCModeSwitchAllowed &= tmR3HasFixedTSC(pVM) && GIMIsEnabled(pVM) && !VM_IS_RAW_MODE_ENABLED(pVM);
1102 LogRel(("TM: TMR3InitFinalize: fTSCModeSwitchAllowed=%RTbool\n", pVM->tm.s.fTSCModeSwitchAllowed));
1103
1104 /*
1105 * Grow the virtual & real timer tables so we've got sufficient
1106 * space for dynamically created timers. We cannot allocate more
1107 * after ring-0 init completes.
1108 */
1109 static struct { uint32_t idxQueue, cExtra; } s_aExtra[] = { {TMCLOCK_VIRTUAL, 128}, {TMCLOCK_REAL, 32} };
1110 for (uint32_t i = 0; i < RT_ELEMENTS(s_aExtra); i++)
1111 {
1112 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[s_aExtra[i].idxQueue];
1113 PDMCritSectRwEnterExcl(pVM, &pQueue->AllocLock, VERR_IGNORED);
1114 if (s_aExtra[i].cExtra > pQueue->cTimersFree)
1115 {
1116 uint32_t cTimersAlloc = pQueue->cTimersAlloc + s_aExtra[i].cExtra - pQueue->cTimersFree;
1117 rc = tmR3TimerQueueGrow(pVM, pQueue, cTimersAlloc);
1118 AssertLogRelMsgReturn(RT_SUCCESS(rc), ("rc=%Rrc cTimersAlloc=%u %s\n", rc, cTimersAlloc, pQueue->szName), rc);
1119 }
1120 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1121 }
1122
1123#ifdef VBOX_WITH_STATISTICS
1124 /*
1125 * Register timer statistics now that we've fixed the timer table sizes.
1126 */
1127 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
1128 {
1129 pVM->tm.s.aTimerQueues[idxQueue].fCannotGrow = true;
1130 tmR3TimerQueueRegisterStats(pVM, &pVM->tm.s.aTimerQueues[idxQueue], UINT32_MAX);
1131 }
1132#endif
1133
1134 return rc;
1135}
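
/*
 * Worked example (editorial, not part of the original source) of the
 * pre-growing arithmetic in TMR3InitFinalize: if the virtual queue has,
 * hypothetically, cTimersAlloc = 64 with cTimersFree = 10, then requesting
 * 128 extra free entries grows the allocation to 64 + 128 - 10 = 182, so
 * at least 128 timers can still be created after ring-0 init has completed
 * and fCannotGrow is set.
 */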
1136
1137
1138/**
1139 * Applies relocations to data and code managed by this
1140 * component. This function will be called at init and
1141 * whenever the VMM needs to relocate itself inside the GC.
1142 *
1143 * @param pVM The cross context VM structure.
1144 * @param offDelta Relocation delta relative to old location.
1145 */
1146VMM_INT_DECL(void) TMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
1147{
1148 LogFlow(("TMR3Relocate\n"));
1149 RT_NOREF(pVM, offDelta);
1150}
1151
1152
1153/**
1154 * Terminates the TM.
1155 *
1156 * Termination means cleaning up and freeing all resources;
1157 * the VM itself is at this point powered off or suspended.
1158 *
1159 * @returns VBox status code.
1160 * @param pVM The cross context VM structure.
1161 */
1162VMM_INT_DECL(int) TMR3Term(PVM pVM)
1163{
1164 if (pVM->tm.s.pTimer)
1165 {
1166 int rc = RTTimerDestroy(pVM->tm.s.pTimer);
1167 AssertRC(rc);
1168 pVM->tm.s.pTimer = NULL;
1169 }
1170
1171 return VINF_SUCCESS;
1172}
1173
1174
1175/**
1176 * The VM is being reset.
1177 *
1178 * For the TM component this means that a rescheduling is performed,
1179 * and the FF is cleared, but without running the queues. We'll have to
1180 * check whether this makes sense or not, but it seems like a good idea for now....
1181 *
1182 * @param pVM The cross context VM structure.
1183 */
1184VMM_INT_DECL(void) TMR3Reset(PVM pVM)
1185{
1186 LogFlow(("TMR3Reset:\n"));
1187 VM_ASSERT_EMT(pVM);
1188
1189 /*
1190 * Abort any pending catch up.
1191 * This isn't perfect...
1192 */
1193 if (pVM->tm.s.fVirtualSyncCatchUp)
1194 {
1195 const uint64_t offVirtualNow = TMVirtualGetNoCheck(pVM);
1196 const uint64_t offVirtualSyncNow = TMVirtualSyncGetNoCheck(pVM);
1197 if (pVM->tm.s.fVirtualSyncCatchUp)
1198 {
1199 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
1200
1201 const uint64_t offOld = pVM->tm.s.offVirtualSyncGivenUp;
1202 const uint64_t offNew = offVirtualNow - offVirtualSyncNow;
1203 Assert(offOld <= offNew);
1204 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
1205 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSync, offNew);
1206 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
1207 LogRel(("TM: Aborting catch-up attempt on reset with a %'RU64 ns lag on reset; new total: %'RU64 ns\n", offNew - offOld, offNew));
1208 }
1209 }
1210
1211 /*
1212 * Process the queues.
1213 */
1214 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
1215 {
1216 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
1217 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
1218 tmTimerQueueSchedule(pVM, pQueue, pQueue);
1219 PDMCritSectLeave(pVM, &pQueue->TimerLock);
1220 }
1221#ifdef VBOX_STRICT
1222 tmTimerQueuesSanityChecks(pVM, "TMR3Reset");
1223#endif
1224
1225 PVMCPU pVCpuDst = pVM->apCpusR3[pVM->tm.s.idTimerCpu];
1226 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /** @todo FIXME: this isn't right. */
1227
1228 /*
1229 * Switch TM TSC mode back to the original mode after a reset for
1230 * paravirtualized guests that alter the TM TSC mode during operation.
1231 * We're already in an EMT rendezvous at this point.
1232 */
1233 if ( pVM->tm.s.fTSCModeSwitchAllowed
1234 && pVM->tm.s.enmTSCMode != pVM->tm.s.enmOriginalTSCMode)
1235 {
1236 VM_ASSERT_EMT0(pVM);
1237 tmR3CpuTickParavirtDisable(pVM, pVM->apCpusR3[0], NULL /* pvData */);
1238 }
1239 Assert(!GIMIsParavirtTscEnabled(pVM));
1240 pVM->tm.s.fParavirtTscEnabled = false;
1241
1242 /*
1243 * Reset TSC to avoid a Windows 8+ bug (see @bugref{8926}). If Windows
1244 * sees TSC value beyond 0x40000000000 at startup, it will reset the
1245 * TSC on boot-up CPU only, causing confusion and mayhem with SMP.
1246 */
1247 VM_ASSERT_EMT0(pVM);
1248 uint64_t offTscRawSrc;
1249 switch (pVM->tm.s.enmTSCMode)
1250 {
1251 case TMTSCMODE_REAL_TSC_OFFSET:
1252 offTscRawSrc = SUPReadTsc();
1253 break;
1254 case TMTSCMODE_DYNAMIC:
1255 case TMTSCMODE_VIRT_TSC_EMULATED:
1256 offTscRawSrc = TMVirtualSyncGetNoCheck(pVM);
1257 offTscRawSrc = ASMMultU64ByU32DivByU32(offTscRawSrc, pVM->tm.s.cTSCTicksPerSecond, TMCLOCK_FREQ_VIRTUAL);
1258 break;
1259 case TMTSCMODE_NATIVE_API:
1260 /** @todo NEM TSC reset on reset for Windows8+ bug workaround. */
1261 offTscRawSrc = 0;
1262 break;
1263 default:
1264 AssertFailedBreakStmt(offTscRawSrc = 0);
1265 }
1266 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
1267 {
1268 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
1269 pVCpu->tm.s.offTSCRawSrc = offTscRawSrc;
1270 pVCpu->tm.s.u64TSC = 0;
1271 pVCpu->tm.s.u64TSCLastSeen = 0;
1272 }
1273}
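
/*
 * Editorial note (not part of the original source) on the reset offsets
 * computed above: offTSCRawSrc is set to the current raw source value so
 * that a guest TSC read, derived as source minus offTSCRawSrc, starts over
 * at (approximately) zero after reset. In the emulated modes the
 * virtual-sync time is first rescaled from the TMCLOCK_FREQ_VIRTUAL base
 * (nominally 1 GHz) to cTSCTicksPerSecond using ASMMultU64ByU32DivByU32,
 * which evaluates a * b / c over a wider intermediate result.
 */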
1274
1275
1276/**
1277 * Execute state save operation.
1278 *
1279 * @returns VBox status code.
1280 * @param pVM The cross context VM structure.
1281 * @param pSSM SSM operation handle.
1282 */
1283static DECLCALLBACK(int) tmR3Save(PVM pVM, PSSMHANDLE pSSM)
1284{
1285 LogFlow(("tmR3Save:\n"));
1286#ifdef VBOX_STRICT
1287 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1288 {
1289 PVMCPU pVCpu = pVM->apCpusR3[i];
1290 Assert(!pVCpu->tm.s.fTSCTicking);
1291 }
1292 Assert(!pVM->tm.s.cVirtualTicking);
1293 Assert(!pVM->tm.s.fVirtualSyncTicking);
1294 Assert(!pVM->tm.s.cTSCsTicking);
1295#endif
1296
1297 /*
1298 * Save the virtual clocks.
1299 */
1300 /* the virtual clock. */
1301 SSMR3PutU64(pSSM, TMCLOCK_FREQ_VIRTUAL);
1302 SSMR3PutU64(pSSM, pVM->tm.s.u64Virtual);
1303
1304 /* the virtual timer synchronous clock. */
1305 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSync);
1306 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSync);
1307 SSMR3PutU64(pSSM, pVM->tm.s.offVirtualSyncGivenUp);
1308 SSMR3PutU64(pSSM, pVM->tm.s.u64VirtualSyncCatchUpPrev);
1309 SSMR3PutBool(pSSM, pVM->tm.s.fVirtualSyncCatchUp);
1310
1311 /* real time clock */
1312 SSMR3PutU64(pSSM, TMCLOCK_FREQ_REAL);
1313
1314 /* the cpu tick clock. */
1315 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1316 {
1317 PVMCPU pVCpu = pVM->apCpusR3[i];
1318 SSMR3PutU64(pSSM, TMCpuTickGet(pVCpu));
1319 }
1320 return SSMR3PutU64(pSSM, pVM->tm.s.cTSCTicksPerSecond);
1321}
1322
1323
1324/**
1325 * Execute state load operation.
1326 *
1327 * @returns VBox status code.
1328 * @param pVM The cross context VM structure.
1329 * @param pSSM SSM operation handle.
1330 * @param uVersion Data layout version.
1331 * @param uPass The data pass.
1332 */
1333static DECLCALLBACK(int) tmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1334{
1335 LogFlow(("tmR3Load:\n"));
1336
1337 Assert(uPass == SSM_PASS_FINAL); NOREF(uPass);
1338#ifdef VBOX_STRICT
1339 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1340 {
1341 PVMCPU pVCpu = pVM->apCpusR3[i];
1342 Assert(!pVCpu->tm.s.fTSCTicking);
1343 }
1344 Assert(!pVM->tm.s.cVirtualTicking);
1345 Assert(!pVM->tm.s.fVirtualSyncTicking);
1346 Assert(!pVM->tm.s.cTSCsTicking);
1347#endif
1348
1349 /*
1350 * Validate version.
1351 */
1352 if (uVersion != TM_SAVED_STATE_VERSION)
1353 {
1354 AssertMsgFailed(("tmR3Load: Invalid version uVersion=%d!\n", uVersion));
1355 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1356 }
1357
1358 /*
1359 * Load the virtual clock.
1360 */
1361 pVM->tm.s.cVirtualTicking = 0;
1362 /* the virtual clock. */
1363 uint64_t u64Hz;
1364 int rc = SSMR3GetU64(pSSM, &u64Hz);
1365 if (RT_FAILURE(rc))
1366 return rc;
1367 if (u64Hz != TMCLOCK_FREQ_VIRTUAL)
1368 {
1369 AssertMsgFailed(("The virtual clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1370 u64Hz, TMCLOCK_FREQ_VIRTUAL));
1371 return VERR_SSM_VIRTUAL_CLOCK_HZ;
1372 }
1373 SSMR3GetU64(pSSM, &pVM->tm.s.u64Virtual);
1374 pVM->tm.s.u64VirtualOffset = 0;
1375
1376 /* the virtual timer synchronous clock. */
1377 pVM->tm.s.fVirtualSyncTicking = false;
1378 uint64_t u64;
1379 SSMR3GetU64(pSSM, &u64);
1380 pVM->tm.s.u64VirtualSync = u64;
1381 SSMR3GetU64(pSSM, &u64);
1382 pVM->tm.s.offVirtualSync = u64;
1383 SSMR3GetU64(pSSM, &u64);
1384 pVM->tm.s.offVirtualSyncGivenUp = u64;
1385 SSMR3GetU64(pSSM, &u64);
1386 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64;
1387 bool f;
1388 SSMR3GetBool(pSSM, &f);
1389 pVM->tm.s.fVirtualSyncCatchUp = f;
1390
1391 /* the real clock */
1392 rc = SSMR3GetU64(pSSM, &u64Hz);
1393 if (RT_FAILURE(rc))
1394 return rc;
1395 if (u64Hz != TMCLOCK_FREQ_REAL)
1396 {
1397 AssertMsgFailed(("The real clock frequency differs! Saved: %'RU64 Binary: %'RU64\n",
1398 u64Hz, TMCLOCK_FREQ_REAL));
1399 return VERR_SSM_VIRTUAL_CLOCK_HZ; /* misleading... */
1400 }
1401
1402 /* the cpu tick clock. */
1403 pVM->tm.s.cTSCsTicking = 0;
1404 pVM->tm.s.offTSCPause = 0;
1405 pVM->tm.s.u64LastPausedTSC = 0;
1406 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1407 {
1408 PVMCPU pVCpu = pVM->apCpusR3[i];
1409
1410 pVCpu->tm.s.fTSCTicking = false;
1411 SSMR3GetU64(pSSM, &pVCpu->tm.s.u64TSC);
1412 if (pVM->tm.s.u64LastPausedTSC < pVCpu->tm.s.u64TSC)
1413 pVM->tm.s.u64LastPausedTSC = pVCpu->tm.s.u64TSC;
1414
1415 if (pVM->tm.s.enmTSCMode == TMTSCMODE_REAL_TSC_OFFSET)
1416 pVCpu->tm.s.offTSCRawSrc = 0; /** @todo TSC restore stuff and HWACC. */
1417 }
1418
1419 rc = SSMR3GetU64(pSSM, &u64Hz);
1420 if (RT_FAILURE(rc))
1421 return rc;
1422 if (pVM->tm.s.enmTSCMode != TMTSCMODE_REAL_TSC_OFFSET)
1423 pVM->tm.s.cTSCTicksPerSecond = u64Hz;
1424
1425 LogRel(("TM: cTSCTicksPerSecond=%#RX64 (%'RU64) enmTSCMode=%d (%s) (state load)\n",
1426 pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.cTSCTicksPerSecond, pVM->tm.s.enmTSCMode, tmR3GetTSCModeName(pVM)));
1427
1428 /* Disabled as this isn't tested, also should this apply only if GIM is enabled etc. */
1429#if 0
1430 /*
1431 * If the current host TSC frequency is incompatible with what is in the
1432 * saved state of the VM, fall back to emulating TSC and disallow TSC mode
1433 * switches during VM runtime (e.g. by GIM).
1434 */
1435 if ( GIMIsEnabled(pVM)
1436 || pVM->tm.s.enmTSCMode == TMTSCMODE_REAL_TSC_OFFSET)
1437 {
1438 uint64_t uGipCpuHz;
1439 bool fRelax = RTSystemIsInsideVM();
1440 bool fCompat = SUPIsTscFreqCompatible(pVM->tm.s.cTSCTicksPerSecond, &uGipCpuHz, fRelax);
1441 if (!fCompat)
1442 {
1443 pVM->tm.s.enmTSCMode = TMTSCMODE_VIRT_TSC_EMULATED;
1444 pVM->tm.s.fTSCModeSwitchAllowed = false;
1445 if (g_pSUPGlobalInfoPage->u32Mode != SUPGIPMODE_ASYNC_TSC)
1446 {
1447 LogRel(("TM: TSC frequency incompatible! uGipCpuHz=%#RX64 (%'RU64) enmTSCMode=%d (%s) fTSCModeSwitchAllowed=%RTbool (state load)\n",
1448 uGipCpuHz, uGipCpuHz, pVM->tm.s.enmTSCMode, tmR3GetTSCModeName(pVM), pVM->tm.s.fTSCModeSwitchAllowed));
1449 }
1450 else
1451 {
1452 LogRel(("TM: GIP is async, enmTSCMode=%d (%s) fTSCModeSwitchAllowed=%RTbool (state load)\n",
1453 pVM->tm.s.enmTSCMode, tmR3GetTSCModeName(pVM), pVM->tm.s.fTSCModeSwitchAllowed));
1454 }
1455 }
1456 }
1457#endif
1458
1459 /*
1460 * Make sure timers get rescheduled immediately.
1461 */
1462 PVMCPU pVCpuDst = pVM->apCpusR3[pVM->tm.s.idTimerCpu];
1463 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
1464
1465 return VINF_SUCCESS;
1466}
1467
1468#ifdef VBOX_WITH_STATISTICS
1469
1470/**
1471 * Register statistics for a timer.
1472 *
1473 * @param pVM The cross context VM structure.
1474 * @param pQueue The queue the timer belongs to.
1475 * @param pTimer The timer to register statistics for.
1476 */
1477static void tmR3TimerRegisterStats(PVM pVM, PTMTIMERQUEUE pQueue, PTMTIMER pTimer)
1478{
1479 STAMR3RegisterF(pVM, &pTimer->StatTimer, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL,
1480 pQueue->szName, "/TM/Timers/%s", pTimer->szName);
1481 STAMR3RegisterF(pVM, &pTimer->StatCritSectEnter, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL,
1482 "", "/TM/Timers/%s/CritSectEnter", pTimer->szName);
1483 STAMR3RegisterF(pVM, &pTimer->StatGet, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_CALLS,
1484 "", "/TM/Timers/%s/Get", pTimer->szName);
1485 STAMR3RegisterF(pVM, &pTimer->StatSetAbsolute, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_CALLS,
1486 "", "/TM/Timers/%s/SetAbsolute", pTimer->szName);
1487 STAMR3RegisterF(pVM, &pTimer->StatSetRelative, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_CALLS,
1488 "", "/TM/Timers/%s/SetRelative", pTimer->szName);
1489 STAMR3RegisterF(pVM, &pTimer->StatStop, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_CALLS,
1490 "", "/TM/Timers/%s/Stop", pTimer->szName);
1491}
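
/*
 * Illustrative sketch (editorial, not part of the original source) of the
 * statistics namespace tmR3TimerRegisterStats creates. For a timer named,
 * hypothetically, "CPU Load Timer", the calls above register:
 *     /TM/Timers/CPU Load Timer                profile, described by queue name
 *     /TM/Timers/CPU Load Timer/CritSectEnter  profile
 *     /TM/Timers/CPU Load Timer/Get            counter
 *     /TM/Timers/CPU Load Timer/SetAbsolute    counter
 *     /TM/Timers/CPU Load Timer/SetRelative    counter
 *     /TM/Timers/CPU Load Timer/Stop           counter
 * tmR3TimerDeregisterStats below removes them again by that prefix.
 */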
1492
1493
1494/**
1495 * Deregister the statistics for a timer.
1496 */
1497static void tmR3TimerDeregisterStats(PVM pVM, PTMTIMER pTimer)
1498{
1499 char szPrefix[128];
1500 size_t cchPrefix = RTStrPrintf(szPrefix, sizeof(szPrefix), "/TM/Timers/%s/", pTimer->szName);
1501 STAMR3DeregisterByPrefix(pVM->pUVM, szPrefix);
1502 szPrefix[cchPrefix - 1] = '\0';
1503 STAMR3Deregister(pVM->pUVM, szPrefix);
1504}
1505
1506
1507/**
1508 * Register statistics for all allocated timers in a queue.
1509 *
1510 * @param pVM The cross context VM structure.
1511 * @param pQueue The queue to register statistics for.
1512 * @param cTimers Number of timers to consider (in growth scenario).
1513 */
1514static void tmR3TimerQueueRegisterStats(PVM pVM, PTMTIMERQUEUE pQueue, uint32_t cTimers)
1515{
1516 uint32_t idxTimer = RT_MIN(cTimers, pQueue->cTimersAlloc);
1517 while (idxTimer-- > 0)
1518 {
1519 PTMTIMER pTimer = &pQueue->paTimers[idxTimer];
1520 TMTIMERSTATE enmState = pTimer->enmState;
1521 if (enmState > TMTIMERSTATE_INVALID && enmState < TMTIMERSTATE_DESTROY)
1522 tmR3TimerRegisterStats(pVM, pQueue, pTimer);
1523 }
1524}
1525
1526#endif /* VBOX_WITH_STATISTICS */
1527
1528
1529/**
1530 * Grows a timer queue.
1531 *
1532 * @returns VBox status code (errors are LogRel'ed already).
1533 * @param pVM The cross context VM structure.
1534 * @param pQueue The timer queue to grow.
1535 * @param cNewTimers The minimum number of timers after growing.
1536 * @note Caller owns the queue's allocation lock.
1537 */
1538static int tmR3TimerQueueGrow(PVM pVM, PTMTIMERQUEUE pQueue, uint32_t cNewTimers)
1539{
1540 /*
1541 * Validate input and state.
1542 */
1543 VM_ASSERT_EMT0_RETURN(pVM, VERR_VM_THREAD_NOT_EMT);
1544 VM_ASSERT_STATE_RETURN(pVM, VMSTATE_CREATING, VERR_VM_INVALID_VM_STATE); /** @todo must do better than this! */
1545 AssertReturn(!pQueue->fCannotGrow, VERR_TM_TIMER_QUEUE_CANNOT_GROW);
1546
1547 uint32_t const cOldEntries = pQueue->cTimersAlloc;
1548 AssertReturn(cNewTimers > cOldEntries, VERR_TM_IPE_1);
1549 AssertReturn(cNewTimers < _32K, VERR_TM_IPE_1);
1550
1551 /*
1552 * Do the growing.
1553 */
1554 int rc;
1555 if (!SUPR3IsDriverless())
1556 {
1557 rc = VMMR3CallR0Emt(pVM, VMMGetCpu(pVM), VMMR0_DO_TM_GROW_TIMER_QUEUE,
1558 RT_MAKE_U64(cNewTimers, (uint64_t)(pQueue - &pVM->tm.s.aTimerQueues[0])), NULL);
1559 AssertLogRelRCReturn(rc, rc);
1560 AssertReturn(pQueue->cTimersAlloc >= cNewTimers, VERR_TM_IPE_3);
1561 }
1562 else
1563 {
1564 AssertReturn(cNewTimers <= _32K && cOldEntries <= _32K, VERR_TM_TOO_MANY_TIMERS);
1565 ASMCompilerBarrier();
1566
1567 /*
1568 * Round up the request to the nearest page and do the allocation.
1569 */
1570 size_t cbNew = sizeof(TMTIMER) * cNewTimers;
1571 cbNew = RT_ALIGN_Z(cbNew, HOST_PAGE_SIZE);
1572 cNewTimers = (uint32_t)(cbNew / sizeof(TMTIMER));
1573
1574 PTMTIMER paTimers = (PTMTIMER)RTMemPageAllocZ(cbNew);
1575 if (paTimers)
1576 {
1577 /*
1578 * Copy over the old timers, init the new free ones, then switch over
1579 * and free the old ones.
1580 */
1581 PTMTIMER const paOldTimers = pQueue->paTimers;
1582 tmHCTimerQueueGrowInit(paTimers, paOldTimers, cNewTimers, cOldEntries);
1583
1584 pQueue->paTimers = paTimers;
1585 pQueue->cTimersAlloc = cNewTimers;
1586 pQueue->cTimersFree += cNewTimers - (cOldEntries ? cOldEntries : 1);
1587
1588 RTMemPageFree(paOldTimers, RT_ALIGN_Z(sizeof(TMTIMER) * cOldEntries, HOST_PAGE_SIZE));
1589 rc = VINF_SUCCESS;
1590 }
1591 else
1592 rc = VERR_NO_PAGE_MEMORY;
1593 }
1594 return rc;
1595}
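
/*
 * Worked example (editorial, not part of the original source) of the
 * driverless page rounding in tmR3TimerQueueGrow: assuming a hypothetical
 * sizeof(TMTIMER) of 96 bytes and a 4 KiB host page, a request for 100
 * timers needs 9600 bytes, which RT_ALIGN_Z rounds up to 12288 (three
 * pages); the count is then recomputed as 12288 / 96 = 128 timers, so the
 * allocation's tail space is put to use rather than wasted.
 */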
1596
1597
1598/**
1599 * Internal TMR3TimerCreate worker.
1600 *
1601 * @returns VBox status code.
1602 * @param pVM The cross context VM structure.
1603 * @param enmClock The timer clock.
1604 * @param fFlags TMTIMER_FLAGS_XXX.
1605 * @param pszName The timer name.
1606 * @param ppTimer Where to store the timer pointer on success.
1607 */
1608static int tmr3TimerCreate(PVM pVM, TMCLOCK enmClock, uint32_t fFlags, const char *pszName, PPTMTIMERR3 ppTimer)
1609{
1610 PTMTIMER pTimer;
1611
1612 /*
1613 * Validate input.
1614 */
1615 VM_ASSERT_EMT(pVM);
1616
1617 AssertReturn((fFlags & (TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0)) != (TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0),
1618 VERR_INVALID_FLAGS);
1619
1620 AssertPtrReturn(pszName, VERR_INVALID_POINTER);
1621 size_t const cchName = strlen(pszName);
1622 AssertMsgReturn(cchName < sizeof(pTimer->szName), ("timer name too long: %s\n", pszName), VERR_INVALID_NAME);
1623 AssertMsgReturn(cchName > 2, ("Too short timer name: %s\n", pszName), VERR_INVALID_NAME);
1624
1625 AssertMsgReturn(enmClock >= TMCLOCK_REAL && enmClock < TMCLOCK_MAX,
1626 ("%d\n", enmClock), VERR_INVALID_PARAMETER);
1627 AssertReturn(enmClock != TMCLOCK_TSC, VERR_NOT_SUPPORTED);
1628 if (enmClock == TMCLOCK_VIRTUAL_SYNC)
1629 VM_ASSERT_STATE_RETURN(pVM, VMSTATE_CREATING, VERR_WRONG_ORDER);
1630
1631 /*
1632 * Exclusively lock the queue.
1633 *
1634 * Note! This means that it is not possible to allocate timers from a timer callback.
1635 */
1636 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[enmClock];
1637 int rc = PDMCritSectRwEnterExcl(pVM, &pQueue->AllocLock, VERR_IGNORED);
1638 AssertRCReturn(rc, rc);
1639
1640 /*
1641 * Allocate the timer.
1642 */
1643 if (!pQueue->cTimersFree)
1644 {
1645 rc = tmR3TimerQueueGrow(pVM, pQueue, pQueue->cTimersAlloc + 64);
1646 AssertRCReturnStmt(rc, PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock), rc);
1647 }
1648
1649 /* Scan the array for free timers. */
1650 pTimer = NULL;
1651 PTMTIMER const paTimers = pQueue->paTimers;
1652 uint32_t const cTimersAlloc = pQueue->cTimersAlloc;
1653 uint32_t idxTimer = pQueue->idxFreeHint;
1654 for (uint32_t iScan = 0; iScan < 2; iScan++)
1655 {
1656 while (idxTimer < cTimersAlloc)
1657 {
1658 if (paTimers[idxTimer].enmState == TMTIMERSTATE_FREE)
1659 {
1660 pTimer = &paTimers[idxTimer];
1661 pQueue->idxFreeHint = idxTimer + 1;
1662 break;
1663 }
1664 idxTimer++;
1665 }
1666 if (pTimer != NULL)
1667 break;
1668 idxTimer = 1;
1669 }
1670 AssertLogRelMsgReturnStmt(pTimer != NULL, ("cTimersFree=%u cTimersAlloc=%u enmClock=%s\n", pQueue->cTimersFree,
1671 pQueue->cTimersAlloc, pQueue->szName),
1672 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock), VERR_INTERNAL_ERROR_3);
1673 pQueue->cTimersFree -= 1;
1674
1675 /*
1676 * Initialize it.
1677 */
1678 Assert(idxTimer != 0);
1679 Assert(idxTimer <= TMTIMERHANDLE_TIMER_IDX_MASK);
1680 pTimer->hSelf = idxTimer
1681 | ((uintptr_t)(pQueue - &pVM->tm.s.aTimerQueues[0]) << TMTIMERHANDLE_QUEUE_IDX_SHIFT);
1682 Assert(!(pTimer->hSelf & TMTIMERHANDLE_RANDOM_MASK));
1683 pTimer->hSelf |= (RTRandU64() & TMTIMERHANDLE_RANDOM_MASK);
1684
1685 pTimer->u64Expire = 0;
1686 pTimer->enmState = TMTIMERSTATE_STOPPED;
1687 pTimer->idxScheduleNext = UINT32_MAX;
1688 pTimer->idxNext = UINT32_MAX;
1689 pTimer->idxPrev = UINT32_MAX;
1690 pTimer->fFlags = fFlags;
1691 pTimer->uHzHint = 0;
1692 pTimer->pvUser = NULL;
1693 pTimer->pCritSect = NULL;
1694 memcpy(pTimer->szName, pszName, cchName);
1695 pTimer->szName[cchName] = '\0';
1696
1697#ifdef VBOX_STRICT
1698 tmTimerQueuesSanityChecks(pVM, "tmR3TimerCreate");
1699#endif
1700
1701 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1702
1703#ifdef VBOX_WITH_STATISTICS
1704 /*
1705 * Only register statistics if we're past the no-realloc point.
1706 */
1707 if (pQueue->fCannotGrow)
1708 tmR3TimerRegisterStats(pVM, pQueue, pTimer);
1709#endif
1710
1711 *ppTimer = pTimer;
1712 return VINF_SUCCESS;
1713}
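
/*
 * Editorial note (not part of the original source) on the handle layout
 * assembled above: hSelf packs the timer's array index (masked by
 * TMTIMERHANDLE_TIMER_IDX_MASK), the owning queue's index (shifted up by
 * TMTIMERHANDLE_QUEUE_IDX_SHIFT) and random bits under
 * TMTIMERHANDLE_RANDOM_MASK. The random bits make it improbable that a
 * stale or guessed handle validates; TMTIMER_HANDLE_TO_VARS_RETURN (see
 * TMR3TimerDestroy below) performs the reverse decoding.
 */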
1714
1715
1716/**
1717 * Creates a device timer.
1718 *
1719 * @returns VBox status code.
1720 * @param pVM The cross context VM structure.
1721 * @param pDevIns Device instance.
1722 * @param enmClock The clock to use on this timer.
1723 * @param pfnCallback Callback function.
1724 * @param pvUser The user argument to the callback.
1725 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1726 * @param pszName Timer name (will be copied). Max 31 chars.
1727 * @param phTimer Where to store the timer handle on success.
1728 */
1729VMM_INT_DECL(int) TMR3TimerCreateDevice(PVM pVM, PPDMDEVINS pDevIns, TMCLOCK enmClock,
1730 PFNTMTIMERDEV pfnCallback, void *pvUser,
1731 uint32_t fFlags, const char *pszName, PTMTIMERHANDLE phTimer)
1732{
1733 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT | TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0)),
1734 VERR_INVALID_FLAGS);
1735
1736 /*
1737 * Allocate and init stuff.
1738 */
1739 PTMTIMER pTimer;
1740 int rc = tmr3TimerCreate(pVM, enmClock, fFlags, pszName, &pTimer);
1741 if (RT_SUCCESS(rc))
1742 {
1743 pTimer->enmType = TMTIMERTYPE_DEV;
1744 pTimer->u.Dev.pfnTimer = pfnCallback;
1745 pTimer->u.Dev.pDevIns = pDevIns;
1746 pTimer->pvUser = pvUser;
1747 if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1748 pTimer->pCritSect = PDMR3DevGetCritSect(pVM, pDevIns);
1749 *phTimer = pTimer->hSelf;
1750 Log(("TM: Created device timer %p clock %d callback %p '%s'\n", phTimer, enmClock, pfnCallback, pszName));
1751 }
1752
1753 return rc;
1754}
1755
1756
1757
1758
1759/**
1760 * Creates a USB device timer.
1761 *
1762 * @returns VBox status code.
1763 * @param pVM The cross context VM structure.
1764 * @param pUsbIns The USB device instance.
1765 * @param enmClock The clock to use on this timer.
1766 * @param pfnCallback Callback function.
1767 * @param pvUser The user argument to the callback.
1768 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1769 * @param pszName Timer name (will be copied). Max 31 chars.
1770 * @param phTimer Where to store the timer handle on success.
1771 */
1772VMM_INT_DECL(int) TMR3TimerCreateUsb(PVM pVM, PPDMUSBINS pUsbIns, TMCLOCK enmClock,
1773 PFNTMTIMERUSB pfnCallback, void *pvUser,
1774 uint32_t fFlags, const char *pszName, PTMTIMERHANDLE phTimer)
1775{
1776 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT | TMTIMER_FLAGS_NO_RING0)), VERR_INVALID_PARAMETER);
1777
1778 /*
1779 * Allocate and init stuff.
1780 */
1781 PTMTIMER pTimer;
1782 int rc = tmr3TimerCreate(pVM, enmClock, fFlags, pszName, &pTimer);
1783 if (RT_SUCCESS(rc))
1784 {
1785 pTimer->enmType = TMTIMERTYPE_USB;
1786 pTimer->u.Usb.pfnTimer = pfnCallback;
1787 pTimer->u.Usb.pUsbIns = pUsbIns;
1788 pTimer->pvUser = pvUser;
1789 //if (!(fFlags & TMTIMER_FLAGS_NO_CRIT_SECT))
1790 //{
1791 // if (pDevIns->pCritSectR3)
1792 // pTimer->pCritSect = pUsbIns->pCritSectR3;
1793 // else
1794 // pTimer->pCritSect = IOMR3GetCritSect(pVM);
1795 //}
1796 *phTimer = pTimer->hSelf;
1797 Log(("TM: Created USB device timer %p clock %d callback %p '%s'\n", *phTimer, enmClock, pfnCallback, pszName));
1798 }
1799
1800 return rc;
1801}
1802
1803
1804/**
1805 * Creates a driver timer.
1806 *
1807 * @returns VBox status code.
1808 * @param pVM The cross context VM structure.
1809 * @param pDrvIns Driver instance.
1810 * @param enmClock The clock to use on this timer.
1811 * @param pfnCallback Callback function.
1812 * @param pvUser The user argument to the callback.
1813 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1814 * @param pszName Timer name (will be copied). Max 31 chars.
1815 * @param phTimer Where to store the timer handle on success.
1816 */
1817VMM_INT_DECL(int) TMR3TimerCreateDriver(PVM pVM, PPDMDRVINS pDrvIns, TMCLOCK enmClock, PFNTMTIMERDRV pfnCallback, void *pvUser,
1818 uint32_t fFlags, const char *pszName, PTMTIMERHANDLE phTimer)
1819{
1820 AssertReturn(!(fFlags & ~(TMTIMER_FLAGS_NO_CRIT_SECT | TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0)),
1821 VERR_INVALID_FLAGS);
1822
1823 /*
1824 * Allocate and init stuff.
1825 */
1826 PTMTIMER pTimer;
1827 int rc = tmr3TimerCreate(pVM, enmClock, fFlags, pszName, &pTimer);
1828 if (RT_SUCCESS(rc))
1829 {
1830 pTimer->enmType = TMTIMERTYPE_DRV;
1831 pTimer->u.Drv.pfnTimer = pfnCallback;
1832 pTimer->u.Drv.pDrvIns = pDrvIns;
1833 pTimer->pvUser = pvUser;
1834 *phTimer = pTimer->hSelf;
1835 Log(("TM: Created device timer %p clock %d callback %p '%s'\n", *phTimer, enmClock, pfnCallback, pszName));
1836 }
1837
1838 return rc;
1839}
1840
1841
1842/**
1843 * Creates an internal timer.
1844 *
1845 * @returns VBox status code.
1846 * @param pVM The cross context VM structure.
1847 * @param enmClock The clock to use on this timer.
1848 * @param pfnCallback Callback function.
1849 * @param pvUser User argument to be passed to the callback.
1850 * @param fFlags Timer creation flags, see grp_tm_timer_flags.
1851 * @param pszName Timer name (will be copied). Max 31 chars.
1852 * @param phTimer Where to store the timer handle on success.
1853 */
1854VMMR3DECL(int) TMR3TimerCreate(PVM pVM, TMCLOCK enmClock, PFNTMTIMERINT pfnCallback, void *pvUser,
1855 uint32_t fFlags, const char *pszName, PTMTIMERHANDLE phTimer)
1856{
1857 AssertReturn(fFlags & (TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0), VERR_INVALID_FLAGS);
1858 AssertReturn((fFlags & (TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0)) != (TMTIMER_FLAGS_RING0 | TMTIMER_FLAGS_NO_RING0),
1859 VERR_INVALID_FLAGS);
1860
1861 /*
1862 * Allocate and init stuff.
1863 */
1864 PTMTIMER pTimer;
1865 int rc = tmr3TimerCreate(pVM, enmClock, fFlags, pszName, &pTimer);
1866 if (RT_SUCCESS(rc))
1867 {
1868 pTimer->enmType = TMTIMERTYPE_INTERNAL;
1869 pTimer->u.Internal.pfnTimer = pfnCallback;
1870 pTimer->pvUser = pvUser;
1871 *phTimer = pTimer->hSelf;
1872 Log(("TM: Created internal timer %p clock %d callback %p '%s'\n", pTimer, enmClock, pfnCallback, pszName));
1873 }
1874
1875 return rc;
1876}
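
/*
 * Usage sketch (editorial, not part of the original source; the callback and
 * all names are hypothetical): creating an internal timer and arming it,
 * much like the CPU load timer set up in TMR3InitFinalize above.
 */
#if 0 /* Example only. */
static DECLCALLBACK(void) tmR3ExampleTimer(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser)
{
    /* A real callback would do its periodic work and possibly re-arm here. */
    RT_NOREF(pVM, hTimer, pvUser);
}

static int tmR3ExampleCreateAndArm(PVM pVM)
{
    TMTIMERHANDLE hTimer = NIL_TMTIMERHANDLE;
    int rc = TMR3TimerCreate(pVM, TMCLOCK_VIRTUAL, tmR3ExampleTimer, NULL /*pvUser*/,
                             TMTIMER_FLAGS_NO_RING0, "Example Timer", &hTimer);
    if (RT_SUCCESS(rc))
        rc = TMTimerSetMillies(pVM, hTimer, 10 /*ms*/); /* first expiry in ~10 ms */
    return rc;
}
#endif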
1877
1878
1879/**
1880 * Destroy a timer
1881 *
1882 * @returns VBox status code.
1883 * @param pVM The cross context VM structure.
1884 * @param pQueue The queue the timer is on.
1885 * @param pTimer Timer handle as returned by one of the create functions.
1886 */
1887static int tmR3TimerDestroy(PVMCC pVM, PTMTIMERQUEUE pQueue, PTMTIMER pTimer)
1888{
1889 bool fActive = false;
1890 bool fPending = false;
1891
1892 AssertMsg( !pTimer->pCritSect
1893 || VMR3GetState(pVM) != VMSTATE_RUNNING
1894 || PDMCritSectIsOwner(pVM, pTimer->pCritSect), ("%s\n", pTimer->szName));
1895
1896 /*
1897 * The rest of the game happens behind the lock, just
1898 * like create does. All the work is done here.
1899 */
1900 PDMCritSectRwEnterExcl(pVM, &pQueue->AllocLock, VERR_IGNORED);
1901 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
1902
1903 for (int cRetries = 1000;; cRetries--)
1904 {
1905 /*
1906 * Change to the DESTROY state.
1907 */
1908 TMTIMERSTATE const enmState = pTimer->enmState;
1909 Log2(("TMTimerDestroy: %p:{.enmState=%s, .szName='%s'} cRetries=%d\n",
1910 pTimer, tmTimerState(enmState), pTimer->szName, cRetries));
1911 switch (enmState)
1912 {
1913 case TMTIMERSTATE_STOPPED:
1914 case TMTIMERSTATE_EXPIRED_DELIVER:
1915 break;
1916
1917 case TMTIMERSTATE_ACTIVE:
1918 fActive = true;
1919 break;
1920
1921 case TMTIMERSTATE_PENDING_STOP:
1922 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
1923 case TMTIMERSTATE_PENDING_RESCHEDULE:
1924 fActive = true;
1925 fPending = true;
1926 break;
1927
1928 case TMTIMERSTATE_PENDING_SCHEDULE:
1929 fPending = true;
1930 break;
1931
1932 /*
1933 * This shouldn't happen as the caller should make sure there are no races.
1934 */
1935 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
1936 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
1937 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
1938 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->szName));
1939 PDMCritSectLeave(pVM, &pQueue->TimerLock);
1940 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1941
1942 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->szName),
1943 VERR_TM_UNSTABLE_STATE);
1944 if (!RTThreadYield())
1945 RTThreadSleep(1);
1946
1947 PDMCritSectRwEnterExcl(pVM, &pQueue->AllocLock, VERR_IGNORED);
1948 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
1949 continue;
1950
1951 /*
1952 * Invalid states.
1953 */
1954 case TMTIMERSTATE_FREE:
1955 case TMTIMERSTATE_DESTROY:
1956 PDMCritSectLeave(pVM, &pQueue->TimerLock);
1957 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1958 AssertLogRelMsgFailedReturn(("pTimer=%p %s\n", pTimer, tmTimerState(enmState)), VERR_TM_INVALID_STATE);
1959
1960 default:
1961 AssertMsgFailed(("Unknown timer state %d (%s)\n", enmState, pTimer->szName));
1962 PDMCritSectLeave(pVM, &pQueue->TimerLock);
1963 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1964 return VERR_TM_UNKNOWN_STATE;
1965 }
1966
1967 /*
1968 * Try to switch to the destroy state.
1969 * This should always succeed as the caller should make sure there are no races.
1970 */
1971 bool fRc;
1972 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_DESTROY, enmState, fRc);
1973 if (fRc)
1974 break;
1975 AssertMsgFailed(("%p:.enmState=%s %s\n", pTimer, tmTimerState(enmState), pTimer->szName));
1976 PDMCritSectLeave(pVM, &pQueue->TimerLock);
1977 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
1978
1979 AssertMsgReturn(cRetries > 0, ("Failed waiting for stable state. state=%d (%s)\n", pTimer->enmState, pTimer->szName),
1980 VERR_TM_UNSTABLE_STATE);
1981
1982 PDMCritSectRwEnterExcl(pVM, &pQueue->AllocLock, VERR_IGNORED);
1983 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
1984 }
1985
1986 /*
1987 * Unlink from the active list.
1988 */
1989 if (fActive)
1990 {
1991 const PTMTIMER pPrev = tmTimerGetPrev(pQueue, pTimer);
1992 const PTMTIMER pNext = tmTimerGetNext(pQueue, pTimer);
1993 if (pPrev)
1994 tmTimerSetNext(pQueue, pPrev, pNext);
1995 else
1996 {
1997 tmTimerQueueSetHead(pQueue, pQueue, pNext);
1998 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
1999 }
2000 if (pNext)
2001 tmTimerSetPrev(pQueue, pNext, pPrev);
2002 pTimer->idxNext = UINT32_MAX;
2003 pTimer->idxPrev = UINT32_MAX;
2004 }
2005
2006 /*
2007 * Unlink from the schedule list by running it.
2008 */
2009 if (fPending)
2010 {
2011 Log3(("TMR3TimerDestroy: tmTimerQueueSchedule\n"));
2012 STAM_PROFILE_START(&pVM->tm.s.CTX_SUFF_Z(StatScheduleOne), a);
2013 Assert(pQueue->idxSchedule < pQueue->cTimersAlloc);
2014 tmTimerQueueSchedule(pVM, pQueue, pQueue);
2015 STAM_PROFILE_STOP(&pVM->tm.s.CTX_SUFF_Z(StatScheduleOne), a);
2016 }
2017
2018#ifdef VBOX_WITH_STATISTICS
2019 /*
2020 * Deregister statistics.
2021 */
2022 tmR3TimerDeregisterStats(pVM, pTimer);
2023#endif
2024
2025 /*
2026 * Change it to free state and update the queue accordingly.
2027 */
2028 Assert(pTimer->idxNext == UINT32_MAX); Assert(pTimer->idxPrev == UINT32_MAX); Assert(pTimer->idxScheduleNext == UINT32_MAX);
2029
2030 TM_SET_STATE(pTimer, TMTIMERSTATE_FREE);
2031
2032 pQueue->cTimersFree += 1;
2033 uint32_t idxTimer = (uint32_t)(pTimer - pQueue->paTimers);
2034 if (idxTimer < pQueue->idxFreeHint)
2035 pQueue->idxFreeHint = idxTimer;
2036
2037#ifdef VBOX_STRICT
2038 tmTimerQueuesSanityChecks(pVM, "TMR3TimerDestroy");
2039#endif
2040 PDMCritSectLeave(pVM, &pQueue->TimerLock);
2041 PDMCritSectRwLeaveExcl(pVM, &pQueue->AllocLock);
2042 return VINF_SUCCESS;
2043}
2044
2045
2046/**
2047 * Destroy a timer
2048 *
2049 * @returns VBox status code.
2050 * @param pVM The cross context VM structure.
2051 * @param hTimer Timer handle as returned by one of the create functions.
2052 */
2053VMMR3DECL(int) TMR3TimerDestroy(PVM pVM, TMTIMERHANDLE hTimer)
2054{
2055 /* We ignore NILs here. */
2056 if (hTimer == NIL_TMTIMERHANDLE)
2057 return VINF_SUCCESS;
2058 TMTIMER_HANDLE_TO_VARS_RETURN(pVM, hTimer); /* => pTimer, pQueueCC, pQueue, idxTimer, idxQueue */
2059 return tmR3TimerDestroy(pVM, pQueue, pTimer);
2060}
2061
2062
2063/**
2064 * Destroy all timers owned by a device.
2065 *
2066 * @returns VBox status code.
2067 * @param pVM The cross context VM structure.
2068 * @param pDevIns Device which timers should be destroyed.
2069 */
2070VMM_INT_DECL(int) TMR3TimerDestroyDevice(PVM pVM, PPDMDEVINS pDevIns)
2071{
2072 LogFlow(("TMR3TimerDestroyDevice: pDevIns=%p\n", pDevIns));
2073 if (!pDevIns)
2074 return VERR_INVALID_PARAMETER;
2075
2076 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
2077 {
2078 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
2079 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2080 uint32_t idxTimer = pQueue->cTimersAlloc;
2081 while (idxTimer-- > 0)
2082 {
2083 PTMTIMER pTimer = &pQueue->paTimers[idxTimer];
2084 if ( pTimer->enmType == TMTIMERTYPE_DEV
2085 && pTimer->u.Dev.pDevIns == pDevIns
2086 && pTimer->enmState < TMTIMERSTATE_DESTROY)
2087 {
2088 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2089
2090 int rc = tmR3TimerDestroy(pVM, pQueue, pTimer);
2091 AssertRC(rc);
2092
2093 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2094 }
2095 }
2096 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2097 }
2098
2099 LogFlow(("TMR3TimerDestroyDevice: returns VINF_SUCCESS\n"));
2100 return VINF_SUCCESS;
2101}
2102
2103
2104/**
2105 * Destroy all timers owned by a USB device.
2106 *
2107 * @returns VBox status code.
2108 * @param pVM The cross context VM structure.
2109 * @param pUsbIns USB device which timers should be destroyed.
2110 */
2111VMM_INT_DECL(int) TMR3TimerDestroyUsb(PVM pVM, PPDMUSBINS pUsbIns)
2112{
2113 LogFlow(("TMR3TimerDestroyUsb: pUsbIns=%p\n", pUsbIns));
2114 if (!pUsbIns)
2115 return VERR_INVALID_PARAMETER;
2116
2117 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
2118 {
2119 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
2120 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2121 uint32_t idxTimer = pQueue->cTimersAlloc;
2122 while (idxTimer-- > 0)
2123 {
2124 PTMTIMER pTimer = &pQueue->paTimers[idxTimer];
2125 if ( pTimer->enmType == TMTIMERTYPE_USB
2126 && pTimer->u.Usb.pUsbIns == pUsbIns
2127 && pTimer->enmState < TMTIMERSTATE_DESTROY)
2128 {
2129 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2130
2131 int rc = tmR3TimerDestroy(pVM, pQueue, pTimer);
2132 AssertRC(rc);
2133
2134 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2135 }
2136 }
2137 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2138 }
2139
2140 LogFlow(("TMR3TimerDestroyUsb: returns VINF_SUCCESS\n"));
2141 return VINF_SUCCESS;
2142}
2143
2144
2145/**
2146 * Destroy all timers owned by a driver.
2147 *
2148 * @returns VBox status code.
2149 * @param pVM The cross context VM structure.
2150 * @param pDrvIns Driver which timers should be destroyed.
2151 */
2152VMM_INT_DECL(int) TMR3TimerDestroyDriver(PVM pVM, PPDMDRVINS pDrvIns)
2153{
2154 LogFlow(("TMR3TimerDestroyDriver: pDrvIns=%p\n", pDrvIns));
2155 if (!pDrvIns)
2156 return VERR_INVALID_PARAMETER;
2157
2158 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
2159 {
2160 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
2161 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2162 uint32_t idxTimer = pQueue->cTimersAlloc;
2163 while (idxTimer-- > 0)
2164 {
2165 PTMTIMER pTimer = &pQueue->paTimers[idxTimer];
2166 if ( pTimer->enmType == TMTIMERTYPE_DRV
2167 && pTimer->u.Drv.pDrvIns == pDrvIns
2168 && pTimer->enmState < TMTIMERSTATE_DESTROY)
2169 {
2170 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2171
2172 int rc = tmR3TimerDestroy(pVM, pQueue, pTimer);
2173 AssertRC(rc);
2174
2175 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
2176 }
2177 }
2178 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
2179 }
2180
2181 LogFlow(("TMR3TimerDestroyDriver: returns VINF_SUCCESS\n"));
2182 return VINF_SUCCESS;
2183}
2184
2185
2186/**
2187 * Internal function for getting the clock time.
2188 *
2189 * @returns clock time.
2190 * @param pVM The cross context VM structure.
2191 * @param enmClock The clock.
2192 */
2193DECLINLINE(uint64_t) tmClock(PVM pVM, TMCLOCK enmClock)
2194{
2195 switch (enmClock)
2196 {
2197 case TMCLOCK_VIRTUAL: return TMVirtualGet(pVM);
2198 case TMCLOCK_VIRTUAL_SYNC: return TMVirtualSyncGet(pVM);
2199 case TMCLOCK_REAL: return TMRealGet(pVM);
2200 case TMCLOCK_TSC: return TMCpuTickGet(pVM->apCpusR3[0] /* just take VCPU 0 */);
2201 default:
2202 AssertMsgFailed(("enmClock=%d\n", enmClock));
2203 return ~(uint64_t)0;
2204 }
2205}
2206
2207
2208/**
2209 * Checks if the sync queue has one or more expired timers.
2210 *
2211 * @returns true / false.
2212 *
2213 * @param pVM The cross context VM structure.
2214 * @param enmClock The queue.
2215 */
2216DECLINLINE(bool) tmR3HasExpiredTimer(PVM pVM, TMCLOCK enmClock)
2217{
2218 const uint64_t u64Expire = pVM->tm.s.aTimerQueues[enmClock].u64Expire;
2219 return u64Expire != INT64_MAX && u64Expire <= tmClock(pVM, enmClock);
2220}
2221
2222
2223/**
2224 * Checks for expired timers in all the queues.
2225 *
2226 * @returns true / false.
2227 * @param pVM The cross context VM structure.
2228 */
2229DECLINLINE(bool) tmR3AnyExpiredTimers(PVM pVM)
2230{
2231 /*
2232 * Combine the time calculation for the first two since we're not on EMT;
2233 * TMVirtualSyncGet only permits EMT.
2234 */
2235 uint64_t u64Now = TMVirtualGetNoCheck(pVM);
2236 if (pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].u64Expire <= u64Now)
2237 return true;
2238 u64Now = pVM->tm.s.fVirtualSyncTicking
2239 ? u64Now - pVM->tm.s.offVirtualSync
2240 : pVM->tm.s.u64VirtualSync;
2241 if (pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC].u64Expire <= u64Now)
2242 return true;
2243
2244 /*
2245 * The remaining timers.
2246 */
2247 if (tmR3HasExpiredTimer(pVM, TMCLOCK_REAL))
2248 return true;
2249 if (tmR3HasExpiredTimer(pVM, TMCLOCK_TSC))
2250 return true;
2251 return false;
2252}
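
/*
 * Worked example (editorial, not part of the original source) of the
 * lockless approximation above: with a hypothetical virtual now of
 * 5 000 000 000 ns and offVirtualSync = 250 000 ns, the sync clock is
 * taken to be 4 999 750 000 ns while ticking; when stopped, the frozen
 * u64VirtualSync value is used instead. Since this runs off-EMT it is
 * only a heuristic for deciding whether to raise VMCPU_FF_TIMER.
 */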
2253
2254
2255/**
2256 * Schedule timer callback.
2257 *
2258 * @param pTimer Timer handle.
2259 * @param pvUser Pointer to the VM.
2260 * @thread Timer thread.
2261 *
2262 * @remark We cannot do the scheduling and queue running from a timer handler
2263 * since it's not executing in EMT, and even if it were it would be async
2264 * and we wouldn't know the state of affairs.
2265 * So, we'll just raise the timer FF and force any REM execution to exit.
2266 */
2267static DECLCALLBACK(void) tmR3TimerCallback(PRTTIMER pTimer, void *pvUser, uint64_t /*iTick*/)
2268{
2269 PVM pVM = (PVM)pvUser;
2270 PVMCPU pVCpuDst = pVM->apCpusR3[pVM->tm.s.idTimerCpu];
2271 NOREF(pTimer);
2272
2273 AssertCompile(TMCLOCK_MAX == 4);
2274 STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallback);
2275
2276#ifdef DEBUG_Sander /* very annoying, keep it private. */
2277 if (VMCPU_FF_IS_SET(pVCpuDst, VMCPU_FF_TIMER))
2278 Log(("tmR3TimerCallback: timer event still pending!!\n"));
2279#endif
2280 if ( !VMCPU_FF_IS_SET(pVCpuDst, VMCPU_FF_TIMER)
2281 && ( pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC].idxSchedule != UINT32_MAX /** @todo FIXME - reconsider offSchedule as a reason for running the timer queues. */
2282 || pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].idxSchedule != UINT32_MAX
2283 || pVM->tm.s.aTimerQueues[TMCLOCK_REAL].idxSchedule != UINT32_MAX
2284 || pVM->tm.s.aTimerQueues[TMCLOCK_TSC].idxSchedule != UINT32_MAX
2285 || tmR3AnyExpiredTimers(pVM)
2286 )
2287 && !VMCPU_FF_IS_SET(pVCpuDst, VMCPU_FF_TIMER)
2288 && !pVM->tm.s.fRunningQueues
2289 )
2290 {
2291 Log5(("TM(%u): FF: 0 -> 1\n", __LINE__));
2292 VMCPU_FF_SET(pVCpuDst, VMCPU_FF_TIMER);
2293 VMR3NotifyCpuFFU(pVCpuDst->pUVCpu, VMNOTIFYFF_FLAGS_DONE_REM | VMNOTIFYFF_FLAGS_POKE);
2294 STAM_COUNTER_INC(&pVM->tm.s.StatTimerCallbackSetFF);
2295 }
2296}
2297
2298
2299/**
2300 * Worker for tmR3TimerQueueDoOne that runs pending timers on the specified
2301 * non-empty timer queue.
2302 *
2303 * @param pVM The cross context VM structure.
2304 * @param pQueue The queue to run.
2305 * @param pTimer The head timer. The caller has already checked that this
2306 * is not NULL.
2307 */
2308static void tmR3TimerQueueRun(PVM pVM, PTMTIMERQUEUE pQueue, PTMTIMER pTimer)
2309{
2310 VM_ASSERT_EMT(pVM); /** @todo relax this */
2311
2312 /*
2313 * Run timers.
2314 *
2315 * We check the clock once and run all timers which are ACTIVE
2316 * and have an expire time less than or equal to the time we read.
2317 *
2318 * N.B. A generic unlink must be applied since other threads
2319 * are allowed to mess with any active timer at any time.
2320 *
2321 * However, we only allow EMT to handle EXPIRED_PENDING
2322 * timers, thus enabling the timer handler function to
2323 * arm the timer again.
2324 */
2325/** @todo the above 'however' is outdated. */
2326 const uint64_t u64Now = tmClock(pVM, pQueue->enmClock);
2327 while (pTimer->u64Expire <= u64Now)
2328 {
2329 PTMTIMER const pNext = tmTimerGetNext(pQueue, pTimer);
2330 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2331 if (pCritSect)
2332 {
2333 STAM_PROFILE_START(&pTimer->StatCritSectEnter, Locking);
2334 PDMCritSectEnter(pVM, pCritSect, VERR_IGNORED);
2335 STAM_PROFILE_STOP(&pTimer->StatCritSectEnter, Locking);
2336 }
2337 Log2(("tmR3TimerQueueRun: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .szName='%s'}\n",
2338 pTimer, tmTimerState(pTimer->enmState), pQueue->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->szName));
2339 bool fRc;
2340 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_GET_UNLINK, TMTIMERSTATE_ACTIVE, fRc);
2341 if (fRc)
2342 {
2343 Assert(pTimer->idxScheduleNext == UINT32_MAX); /* this can trigger falsely */
2344
2345 /* unlink */
2346 const PTMTIMER pPrev = tmTimerGetPrev(pQueue, pTimer);
2347 if (pPrev)
2348 tmTimerSetNext(pQueue, pPrev, pNext);
2349 else
2350 {
2351 tmTimerQueueSetHead(pQueue, pQueue, pNext);
2352 pQueue->u64Expire = pNext ? pNext->u64Expire : INT64_MAX;
2353 }
2354 if (pNext)
2355 tmTimerSetPrev(pQueue, pNext, pPrev);
2356 pTimer->idxNext = UINT32_MAX;
2357 pTimer->idxPrev = UINT32_MAX;
2358
2359 /* fire */
2360 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2361 STAM_PROFILE_START(&pTimer->StatTimer, PrfTimer);
2362 switch (pTimer->enmType)
2363 {
2364 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer->hSelf, pTimer->pvUser); break;
2365 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer->hSelf, pTimer->pvUser); break;
2366 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer->hSelf, pTimer->pvUser); break;
2367 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer->hSelf, pTimer->pvUser); break;
2368 default:
2369 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->szName));
2370 break;
2371 }
2372 STAM_PROFILE_STOP(&pTimer->StatTimer, PrfTimer);
2373
2374 /* change the state if it wasn't changed already in the handler. */
2375 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2376 Log2(("tmR3TimerQueueRun: new state %s\n", tmTimerState(pTimer->enmState)));
2377 }
2378 if (pCritSect)
2379 PDMCritSectLeave(pVM, pCritSect);
2380
2381 /* Advance? */
2382 pTimer = pNext;
2383 if (!pTimer)
2384 break;
2385 } /* run loop */
2386}
2387
2388
2389/**
2390 * Service one regular timer queue.
2391 *
2392 * @param pVM The cross context VM structure.
2393 * @param pQueue The queue.
2394 */
2395static void tmR3TimerQueueDoOne(PVM pVM, PTMTIMERQUEUE pQueue)
2396{
2397 Assert(pQueue->enmClock != TMCLOCK_VIRTUAL_SYNC);
2398
2399 /*
2400 * Only one thread should be "doing" the queue.
2401 */
2402 if (ASMAtomicCmpXchgBool(&pQueue->fBeingProcessed, true, false))
2403 {
2404 STAM_PROFILE_START(&pQueue->StatDo, s);
2405 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
2406
2407 if (pQueue->idxSchedule != UINT32_MAX)
2408 tmTimerQueueSchedule(pVM, pQueue, pQueue);
2409
2410 PTMTIMER pHead = tmTimerQueueGetHead(pQueue, pQueue);
2411 if (pHead)
2412 tmR3TimerQueueRun(pVM, pQueue, pHead);
2413
2414 PDMCritSectLeave(pVM, &pQueue->TimerLock);
2415 STAM_PROFILE_STOP(&pQueue->StatDo, s);
2416 ASMAtomicWriteBool(&pQueue->fBeingProcessed, false);
2417 }
2418}
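
/*
 * Editorial note (not part of the original source) on the gate above:
 * ASMAtomicCmpXchgBool(&pQueue->fBeingProcessed, true, false) succeeds only
 * for the first caller; concurrent callers find the flag already true and
 * skip the queue entirely instead of blocking, so each regular queue is
 * serviced by at most one thread at a time.
 */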
2419
2420
2421/**
2422 * Schedules and runs any pending timers in the timer queue for the
2423 * synchronous virtual clock.
2424 *
2425 * This scheduling is a bit different from the other queues as it needs
2426 * to implement the special requirements of the timer synchronous virtual
2427 * clock, thus this 2nd queue run function.
2428 *
2429 * @param pVM The cross context VM structure.
2430 *
2431 * @remarks The caller must own the Virtual Sync lock. Owning the TM lock is no
2432 * longer important.
2433 */
2434static void tmR3TimerQueueRunVirtualSync(PVM pVM)
2435{
2436 PTMTIMERQUEUE const pQueue = &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC];
2437 VM_ASSERT_EMT(pVM);
2438 Assert(PDMCritSectIsOwner(pVM, &pVM->tm.s.VirtualSyncLock));
2439
2440 /*
2441 * Any timers?
2442 */
2443 PTMTIMER pNext = tmTimerQueueGetHead(pQueue, pQueue);
2444 if (RT_UNLIKELY(!pNext))
2445 {
2446 Assert(pVM->tm.s.fVirtualSyncTicking || !pVM->tm.s.cVirtualTicking);
2447 return;
2448 }
2449 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRun);
2450
2451 /*
2452 * Calculate the time frame for which we will dispatch timers.
2453 *
2454 * We use a time frame ranging from the current sync time (which is most likely the
2455 * same as the head timer) plus some configurable period (100000ns) up towards the
2456 * current virtual time. This period might also need to be restricted by the catch-up
2457 * rate so frequent calls to this function won't accelerate the time too much, however
2458 * this will be implemented at a later point if necessary.
2459 *
2460 * Without this frame we would 1) have to run timers much more frequently
2461 * and 2) lag behind at a steady rate.
2462 */
2463 const uint64_t u64VirtualNow = TMVirtualGetNoCheck(pVM);
2464 uint64_t const offSyncGivenUp = pVM->tm.s.offVirtualSyncGivenUp;
2465 uint64_t u64Now;
2466 if (!pVM->tm.s.fVirtualSyncTicking)
2467 {
2468 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStoppedAlready);
2469 u64Now = pVM->tm.s.u64VirtualSync;
2470 Assert(u64Now <= pNext->u64Expire);
2471 }
2472 else
2473 {
2474 /* Calc 'now'. */
2475 bool fStopCatchup = false;
2476 bool fUpdateStuff = false;
2477 uint64_t off = pVM->tm.s.offVirtualSync;
2478 if (pVM->tm.s.fVirtualSyncCatchUp)
2479 {
2480 uint64_t u64Delta = u64VirtualNow - pVM->tm.s.u64VirtualSyncCatchUpPrev;
2481 if (RT_LIKELY(!(u64Delta >> 32)))
2482 {
2483 uint64_t u64Sub = ASMMultU64ByU32DivByU32(u64Delta, pVM->tm.s.u32VirtualSyncCatchUpPercentage, 100);
2484 if (off > u64Sub + offSyncGivenUp)
2485 {
2486 off -= u64Sub;
2487 Log4(("TM: %'RU64/-%'8RU64: sub %'RU64 [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow - off, off - offSyncGivenUp, u64Sub));
2488 }
2489 else
2490 {
2491 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2492 fStopCatchup = true;
2493 off = offSyncGivenUp;
2494 }
2495 fUpdateStuff = true;
2496 }
2497 }
2498 u64Now = u64VirtualNow - off;
2499
2500 /* Adjust against last returned time. */
2501 uint64_t u64Last = ASMAtomicUoReadU64(&pVM->tm.s.u64VirtualSync);
2502 if (u64Last > u64Now)
2503 {
2504 u64Now = u64Last + 1;
2505 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGetAdjLast);
2506 }
2507
2508 /* Check if stopped by expired timer. */
2509 uint64_t const u64Expire = pNext->u64Expire;
2510 if (u64Now >= u64Expire)
2511 {
2512 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunStop);
2513 u64Now = u64Expire;
2514 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2515 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2516 Log4(("TM: %'RU64/-%'8RU64: exp tmr [tmR3TimerQueueRunVirtualSync]\n", u64Now, u64VirtualNow - u64Now - offSyncGivenUp));
2517 }
2518 else
2519 {
2520 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2521 if (fUpdateStuff)
2522 {
2523 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, off);
2524 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSyncCatchUpPrev, u64VirtualNow);
2525 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, u64Now);
2526 if (fStopCatchup)
2527 {
2528 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2529 Log4(("TM: %'RU64/0: caught up [tmR3TimerQueueRunVirtualSync]\n", u64VirtualNow));
2530 }
2531 }
2532 }
2533 }
2534
2535 /* calc end of frame. */
2536 uint64_t u64Max = u64Now + pVM->tm.s.u32VirtualSyncScheduleSlack;
2537 if (u64Max > u64VirtualNow - offSyncGivenUp)
2538 u64Max = u64VirtualNow - offSyncGivenUp;
2539
2540 /* assert sanity */
2541 Assert(u64Now <= u64VirtualNow - offSyncGivenUp);
2542 Assert(u64Max <= u64VirtualNow - offSyncGivenUp);
2543 Assert(u64Now <= u64Max);
2544 Assert(offSyncGivenUp == pVM->tm.s.offVirtualSyncGivenUp);
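
/*
 * Worked example (editorial, not part of the original source) of the
 * catch-up arithmetic above: with a hypothetical
 * u32VirtualSyncCatchUpPercentage of 100 and u64Delta = 2 000 000 ns of
 * virtual time elapsed since the previous poll, u64Sub
 * = 2 000 000 * 100 / 100 = 2 000 000 ns extra is subtracted from the
 * offset, i.e. the sync clock runs at double speed until 'off' shrinks to
 * offSyncGivenUp and catch-up is stopped.
 */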
2545
2546 /*
2547 * Process the expired timers moving the clock along as we progress.
2548 */
2549#ifdef VBOX_STRICT
2550 uint64_t u64Prev = u64Now; NOREF(u64Prev);
2551#endif
2552 while (pNext && pNext->u64Expire <= u64Max)
2553 {
2554 /* Advance */
2555 PTMTIMER pTimer = pNext;
2556 pNext = tmTimerGetNext(pQueue, pTimer);
2557
2558 /* Take the associated lock. */
2559 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2560 if (pCritSect)
2561 {
2562 STAM_PROFILE_START(&pTimer->StatCritSectEnter, Locking);
2563 PDMCritSectEnter(pVM, pCritSect, VERR_IGNORED);
2564 STAM_PROFILE_STOP(&pTimer->StatCritSectEnter, Locking);
2565 }
2566
2567 Log2(("tmR3TimerQueueRunVirtualSync: %p:{.enmState=%s, .enmClock=%d, .enmType=%d, u64Expire=%llx (now=%llx) .szName='%s'}\n",
2568 pTimer, tmTimerState(pTimer->enmState), pQueue->enmClock, pTimer->enmType, pTimer->u64Expire, u64Now, pTimer->szName));
2569
2570 /* Advance the clock - don't permit timers to be out of order or armed
2571 in the 'past'. */
2572#ifdef VBOX_STRICT
2573 AssertMsg(pTimer->u64Expire >= u64Prev, ("%'RU64 < %'RU64 %s\n", pTimer->u64Expire, u64Prev, pTimer->szName));
2574 u64Prev = pTimer->u64Expire;
2575#endif
2576 ASMAtomicWriteU64(&pVM->tm.s.u64VirtualSync, pTimer->u64Expire);
2577 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, false);
2578
2579 /* Unlink it, change the state and do the callout. */
2580 tmTimerQueueUnlinkActive(pVM, pQueue, pQueue, pTimer);
2581 TM_SET_STATE(pTimer, TMTIMERSTATE_EXPIRED_DELIVER);
2582 STAM_PROFILE_START(&pTimer->StatTimer, PrfTimer);
2583 switch (pTimer->enmType)
2584 {
2585 case TMTIMERTYPE_DEV: pTimer->u.Dev.pfnTimer(pTimer->u.Dev.pDevIns, pTimer->hSelf, pTimer->pvUser); break;
2586 case TMTIMERTYPE_USB: pTimer->u.Usb.pfnTimer(pTimer->u.Usb.pUsbIns, pTimer->hSelf, pTimer->pvUser); break;
2587 case TMTIMERTYPE_DRV: pTimer->u.Drv.pfnTimer(pTimer->u.Drv.pDrvIns, pTimer->hSelf, pTimer->pvUser); break;
2588 case TMTIMERTYPE_INTERNAL: pTimer->u.Internal.pfnTimer(pVM, pTimer->hSelf, pTimer->pvUser); break;
2589 default:
2590 AssertMsgFailed(("Invalid timer type %d (%s)\n", pTimer->enmType, pTimer->szName));
2591 break;
2592 }
2593 STAM_PROFILE_STOP(&pTimer->StatTimer, PrfTimer);
2594
2595 /* Change the state if it wasn't changed already in the handler.
2596 Reset the Hz hint too since this is the same as TMTimerStop. */
2597 bool fRc;
2598 TM_TRY_SET_STATE(pTimer, TMTIMERSTATE_STOPPED, TMTIMERSTATE_EXPIRED_DELIVER, fRc);
2599 if (fRc && pTimer->uHzHint)
2600 {
2601 if (pTimer->uHzHint >= pQueue->uMaxHzHint)
2602 ASMAtomicOrU64(&pVM->tm.s.HzHint.u64Combined, RT_BIT_32(TMCLOCK_VIRTUAL_SYNC) | RT_BIT_32(TMCLOCK_VIRTUAL_SYNC + 16));
2603 pTimer->uHzHint = 0;
2604 }
2605 Log2(("tmR3TimerQueueRunVirtualSync: new state %s\n", tmTimerState(pTimer->enmState)));
2606
2607 /* Leave the associated lock. */
2608 if (pCritSect)
2609 PDMCritSectLeave(pVM, pCritSect);
2610 } /* run loop */
2611
2612
2613 /*
2614 * Restart the clock if it was stopped to serve any timers,
2615 * and start/adjust catch-up if necessary.
2616 */
2617 if ( !pVM->tm.s.fVirtualSyncTicking
2618 && pVM->tm.s.cVirtualTicking)
2619 {
2620 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncRunRestart);
2621
2622 /* calc the slack we've handed out. */
2623 const uint64_t u64VirtualNow2 = TMVirtualGetNoCheck(pVM);
2624 Assert(u64VirtualNow2 >= u64VirtualNow);
2625 AssertMsg(pVM->tm.s.u64VirtualSync >= u64Now, ("%'RU64 < %'RU64\n", pVM->tm.s.u64VirtualSync, u64Now));
2626 const uint64_t offSlack = pVM->tm.s.u64VirtualSync - u64Now;
2627 STAM_STATS({
2628 if (offSlack)
2629 {
2630 PSTAMPROFILE p = &pVM->tm.s.StatVirtualSyncRunSlack;
2631 p->cPeriods++;
2632 p->cTicks += offSlack;
2633 if (p->cTicksMax < offSlack) p->cTicksMax = offSlack;
2634 if (p->cTicksMin > offSlack) p->cTicksMin = offSlack;
2635 }
2636 });
2637
2638 /* Let the time run a little bit while we were busy running timers(?). */
2639 uint64_t u64Elapsed;
2640#define MAX_ELAPSED 30000U /* ns */
2641 if (offSlack > MAX_ELAPSED)
2642 u64Elapsed = 0;
2643 else
2644 {
2645 u64Elapsed = u64VirtualNow2 - u64VirtualNow;
2646 if (u64Elapsed > MAX_ELAPSED)
2647 u64Elapsed = MAX_ELAPSED;
2648 u64Elapsed = u64Elapsed > offSlack ? u64Elapsed - offSlack : 0;
2649 }
2650#undef MAX_ELAPSED
2651
2652 /* Calc the current offset. */
2653 uint64_t offNew = u64VirtualNow2 - pVM->tm.s.u64VirtualSync - u64Elapsed;
2654 Assert(!(offNew & RT_BIT_64(63)));
2655 uint64_t offLag = offNew - pVM->tm.s.offVirtualSyncGivenUp;
2656 Assert(!(offLag & RT_BIT_64(63)));
2657
2658 /*
2659 * Deal with starting, adjusting and stopping catchup.
2660 */
2661 if (pVM->tm.s.fVirtualSyncCatchUp)
2662 {
2663 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpStopThreshold)
2664 {
2665 /* stop */
2666 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2667 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2668 Log4(("TM: %'RU64/-%'8RU64: caught up [pt]\n", u64VirtualNow2 - offNew, offLag));
2669 }
2670 else if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2671 {
2672 /* adjust */
2673 unsigned i = 0;
2674 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2675 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2676 i++;
2677 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage < pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage)
2678 {
2679 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupAdjust[i]);
2680 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2681 Log4(("TM: %'RU64/%'8RU64: adj %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2682 }
2683 pVM->tm.s.u64VirtualSyncCatchUpPrev = u64VirtualNow2;
2684 }
2685 else
2686 {
2687 /* give up */
2688 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUp);
2689 STAM_PROFILE_ADV_STOP(&pVM->tm.s.StatVirtualSyncCatchup, c);
2690 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2691 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, false);
2692 Log4(("TM: %'RU64/%'8RU64: give up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2693 LogRel(("TM: Giving up catch-up attempt at a %'RU64 ns lag; new total: %'RU64 ns\n", offLag, offNew));
2694 }
2695 }
2696 else if (offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[0].u64Start)
2697 {
2698 if (offLag <= pVM->tm.s.u64VirtualSyncCatchUpGiveUpThreshold)
2699 {
2700 /* start */
2701 STAM_PROFILE_ADV_START(&pVM->tm.s.StatVirtualSyncCatchup, c);
2702 unsigned i = 0;
2703 while ( i + 1 < RT_ELEMENTS(pVM->tm.s.aVirtualSyncCatchUpPeriods)
2704 && offLag >= pVM->tm.s.aVirtualSyncCatchUpPeriods[i + 1].u64Start)
2705 i++;
2706 STAM_COUNTER_INC(&pVM->tm.s.aStatVirtualSyncCatchupInitial[i]);
2707 ASMAtomicWriteU32(&pVM->tm.s.u32VirtualSyncCatchUpPercentage, pVM->tm.s.aVirtualSyncCatchUpPeriods[i].u32Percentage);
2708 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncCatchUp, true);
2709 Log4(("TM: %'RU64/%'8RU64: catch-up %u%%\n", u64VirtualNow2 - offNew, offLag, pVM->tm.s.u32VirtualSyncCatchUpPercentage));
2710 }
2711 else
2712 {
2713 /* don't bother */
2714 STAM_COUNTER_INC(&pVM->tm.s.StatVirtualSyncGiveUpBeforeStarting);
2715 ASMAtomicWriteU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp, offNew);
2716 Log4(("TM: %'RU64/%'8RU64: give up\n", u64VirtualNow2 - offNew, offLag));
2717 LogRel(("TM: Not bothering to attempt catching up a %'RU64 ns lag; new total: %'RU64\n", offLag, offNew));
2718 }
2719 }
2720
2721 /*
2722 * Update the offset and restart the clock.
2723 */
2724 Assert(!(offNew & RT_BIT_64(63)));
2725 ASMAtomicWriteU64(&pVM->tm.s.offVirtualSync, offNew);
2726 ASMAtomicWriteBool(&pVM->tm.s.fVirtualSyncTicking, true);
2727 }
2728}
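
/*
 * A minimal sketch of the catch-up arithmetic used above, with illustrative
 * (made up) values. While catching up, every u64Delta nanoseconds of virtual
 * time shrink the virtual sync offset by u64Delta * percentage / 100, so the
 * guest sees its clock running at (100 + percentage)% speed until the lag
 * drops below the stop threshold.
 *
 * @code
 *     uint64_t const u64Delta = 1000000;  // 1 ms of virtual time elapsed
 *     uint32_t const uPct     = 25;       // 25% catch-up rate
 *     uint64_t const u64Sub   = ASMMultU64ByU32DivByU32(u64Delta, uPct, 100);
 *     // u64Sub = 250000 ns: the offset (lag) shrinks by 0.25 ms for every
 *     // 1 ms that passes, i.e. the guest clock runs at 125% speed.
 * @endcode
 */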
2729
2730
2731/**
2732 * Deals with stopped Virtual Sync clock.
2733 *
2734 * This is called by the forced action flag handling code in EM when it
2735 * encounters the VM_FF_TM_VIRTUAL_SYNC flag. It is called by all VCPUs and they
2736 * will block on the VirtualSyncLock until the pending timers has been executed
2737 * and the clock restarted.
2738 *
2739 * @param pVM The cross context VM structure.
2740 * @param pVCpu The cross context virtual CPU structure of the calling EMT.
2741 *
2742 * @thread EMTs
2743 */
2744VMMR3_INT_DECL(void) TMR3VirtualSyncFF(PVM pVM, PVMCPU pVCpu)
2745{
2746 Log2(("TMR3VirtualSyncFF:\n"));
2747
2748 /*
2749 * The EMT doing the timers is diverted to them.
2750 */
2751 if (pVCpu->idCpu == pVM->tm.s.idTimerCpu)
2752 TMR3TimerQueuesDo(pVM);
2753 /*
2754 * The other EMTs will block on the virtual sync lock and the first owner
2755 * will run the queue and thus restart the clock.
2756 *
2757 * Note! This is very suboptimal code wrt resuming execution when there
2758 * are more than two Virtual CPUs, since they will all have to enter
2759 * the critical section one by one. But it's a very simple solution
2760 * which will have to do the job for now.
2761 */
2762 else
2763 {
2764/** @todo Optimize for SMP */
2765 STAM_PROFILE_START(&pVM->tm.s.StatVirtualSyncFF, a);
2766 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED);
2767 if (pVM->tm.s.fVirtualSyncTicking)
2768 {
2769 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2770 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
2771 Log2(("TMR3VirtualSyncFF: ticking\n"));
2772 }
2773 else
2774 {
2775 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
2776
2777 /* try run it. */
2778 PDMCritSectEnter(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].TimerLock, VERR_IGNORED);
2779 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED);
2780 if (pVM->tm.s.fVirtualSyncTicking)
2781 Log2(("TMR3VirtualSyncFF: ticking (2)\n"));
2782 else
2783 {
2784 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
2785 Log2(("TMR3VirtualSyncFF: running queue\n"));
2786
2787 Assert(pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC].idxSchedule == UINT32_MAX);
2788 tmR3TimerQueueRunVirtualSync(pVM);
2789 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
2790 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
2791
2792 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
2793 }
2794 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
2795 STAM_PROFILE_STOP(&pVM->tm.s.StatVirtualSyncFF, a); /* before the unlock! */
2796 PDMCritSectLeave(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL].TimerLock);
2797 }
2798 }
2799}
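
/*
 * A hedged sketch of the calling side: when VM_FF_TM_VIRTUAL_SYNC is raised
 * (the virtual sync clock was stopped to deliver timers), each EMT simply
 * calls here and either runs the queue or blocks until the clock ticks
 * again. Simplified; the real dispatch lives in EM's forced action handling.
 *
 * @code
 *     if (VM_FF_IS_SET(pVM, VM_FF_TM_VIRTUAL_SYNC))
 *         TMR3VirtualSyncFF(pVM, pVCpu);
 * @endcode
 */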
2800
2801
2802/**
2803 * Service the special virtual sync timer queue.
2804 *
2805 * @param pVM The cross context VM structure.
2806 * @param pVCpuDst The destination VCpu.
2807 */
2808static void tmR3TimerQueueDoVirtualSync(PVM pVM, PVMCPU pVCpuDst)
2809{
2810 PTMTIMERQUEUE pQueue = &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL_SYNC];
2811 if (ASMAtomicCmpXchgBool(&pQueue->fBeingProcessed, true, false))
2812 {
2813 STAM_PROFILE_START(&pQueue->StatDo, s1);
2814 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
2815 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED);
2816 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, true);
2817 VMCPU_FF_CLEAR(pVCpuDst, VMCPU_FF_TIMER); /* Clear the FF once we started working for real. */
2818
2819 Assert(pQueue->idxSchedule == UINT32_MAX);
2820 tmR3TimerQueueRunVirtualSync(pVM);
2821 if (pVM->tm.s.fVirtualSyncTicking) /** @todo move into tmR3TimerQueueRunVirtualSync - FIXME */
2822 VM_FF_CLEAR(pVM, VM_FF_TM_VIRTUAL_SYNC);
2823
2824 ASMAtomicWriteBool(&pVM->tm.s.fRunningVirtualSyncQueue, false);
2825 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
2826 PDMCritSectLeave(pVM, &pQueue->TimerLock);
2827 STAM_PROFILE_STOP(&pQueue->StatDo, s1);
2828 ASMAtomicWriteBool(&pQueue->fBeingProcessed, false);
2829 }
2830}
2831
2832
2833/**
2834 * Schedules and runs any pending timers.
2835 *
2836 * This is normally called from a forced action handler in EMT.
2837 *
2838 * @param pVM The cross context VM structure.
2839 *
2840 * @thread EMT (actually EMT0, but we fend off the others)
2841 */
2842VMMR3DECL(void) TMR3TimerQueuesDo(PVM pVM)
2843{
2844 /*
2845 * Only the dedicated timer EMT should do stuff here.
2846 * (fRunningQueues is only used as an indicator.)
2847 */
2848 Assert(pVM->tm.s.idTimerCpu < pVM->cCpus);
2849 PVMCPU pVCpuDst = pVM->apCpusR3[pVM->tm.s.idTimerCpu];
2850 if (VMMGetCpu(pVM) != pVCpuDst)
2851 {
2852 Assert(pVM->cCpus > 1);
2853 return;
2854 }
2855 STAM_PROFILE_START(&pVM->tm.s.StatDoQueues, a);
2856 Log2(("TMR3TimerQueuesDo:\n"));
2857 Assert(!pVM->tm.s.fRunningQueues);
2858 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, true);
2859
2860 /*
2861 * Process the queues.
2862 */
2863 AssertCompile(TMCLOCK_MAX == 4);
2864
2865 /*
2866 * TMCLOCK_VIRTUAL_SYNC (see also TMR3VirtualSyncFF)
2867 */
2868 tmR3TimerQueueDoVirtualSync(pVM, pVCpuDst);
2869
2870 /*
2871 * TMCLOCK_VIRTUAL
2872 */
2873 tmR3TimerQueueDoOne(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_VIRTUAL]);
2874
2875 /*
2876 * TMCLOCK_TSC
2877 */
2878 Assert(pVM->tm.s.aTimerQueues[TMCLOCK_TSC].idxActive == UINT32_MAX); /* not used */
2879
2880 /*
2881 * TMCLOCK_REAL
2882 */
2883 tmR3TimerQueueDoOne(pVM, &pVM->tm.s.aTimerQueues[TMCLOCK_REAL]);
2884
2885#ifdef VBOX_STRICT
2886 /* check that we didn't screw up. */
2887 tmTimerQueuesSanityChecks(pVM, "TMR3TimerQueuesDo");
2888#endif
2889
2890 /* done */
2891 Log2(("TMR3TimerQueuesDo: returns void\n"));
2892 ASMAtomicWriteBool(&pVM->tm.s.fRunningQueues, false);
2893 STAM_PROFILE_STOP(&pVM->tm.s.StatDoQueues, a);
2894}
2895
2896
2897
2898/** @name Saved state values
2899 * @{ */
2900#define TMTIMERSTATE_SAVED_PENDING_STOP 4
2901#define TMTIMERSTATE_SAVED_PENDING_SCHEDULE 7
2902/** @} */
2903
2904
2905/**
2906 * Saves the state of a timer to a saved state.
2907 *
2908 * @returns VBox status code.
2909 * @param pVM The cross context VM structure.
2910 * @param hTimer Timer to save.
2911 * @param pSSM Save State Manager handle.
2912 */
2913VMMR3DECL(int) TMR3TimerSave(PVM pVM, TMTIMERHANDLE hTimer, PSSMHANDLE pSSM)
2914{
2915 VM_ASSERT_EMT(pVM);
2916 TMTIMER_HANDLE_TO_VARS_RETURN(pVM, hTimer); /* => pTimer, pQueueCC, pQueue, idxTimer, idxQueue */
2917 LogFlow(("TMR3TimerSave: %p:{enmState=%s, .szName='%s'} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->szName, pSSM));
2918
2919 switch (pTimer->enmState)
2920 {
2921 case TMTIMERSTATE_STOPPED:
2922 case TMTIMERSTATE_PENDING_STOP:
2923 case TMTIMERSTATE_PENDING_STOP_SCHEDULE:
2924 return SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_STOP);
2925
2926 case TMTIMERSTATE_PENDING_SCHEDULE_SET_EXPIRE:
2927 case TMTIMERSTATE_PENDING_RESCHEDULE_SET_EXPIRE:
2928 AssertMsgFailed(("u64Expire is being updated! (%s)\n", pTimer->szName));
2929 if (!RTThreadYield())
2930 RTThreadSleep(1);
2931 RT_FALL_THRU();
2932 case TMTIMERSTATE_ACTIVE:
2933 case TMTIMERSTATE_PENDING_SCHEDULE:
2934 case TMTIMERSTATE_PENDING_RESCHEDULE:
2935 SSMR3PutU8(pSSM, TMTIMERSTATE_SAVED_PENDING_SCHEDULE);
2936 return SSMR3PutU64(pSSM, pTimer->u64Expire);
2937
2938 case TMTIMERSTATE_EXPIRED_GET_UNLINK:
2939 case TMTIMERSTATE_EXPIRED_DELIVER:
2940 case TMTIMERSTATE_DESTROY:
2941 case TMTIMERSTATE_FREE:
2942 case TMTIMERSTATE_INVALID:
2943 AssertMsgFailed(("Invalid timer state %d %s (%s)\n", pTimer->enmState, tmTimerState(pTimer->enmState), pTimer->szName));
2944 return SSMR3HandleSetStatus(pSSM, VERR_TM_INVALID_STATE);
2945 }
2946
2947 AssertMsgFailed(("Unknown timer state %d (%s)\n", pTimer->enmState, pTimer->szName));
2948 return SSMR3HandleSetStatus(pSSM, VERR_TM_UNKNOWN_STATE);
2949}
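
/*
 * A minimal sketch of using this from a device's save-state callback; the
 * device structure, field and callback names (DEVFOO, hTimer) are
 * hypothetical:
 *
 * @code
 *     static DECLCALLBACK(int) devFooSaveExec(PPDMDEVINS pDevIns, PSSMHANDLE pSSM)
 *     {
 *         PDEVFOO pThis = PDMDEVINS_2_DATA(pDevIns, PDEVFOO);  // hypothetical state
 *         // ... save the device registers first ...
 *         // Then persist the timer (stopped, or scheduled + expire time):
 *         return TMR3TimerSave(PDMDevHlpGetVM(pDevIns), pThis->hTimer, pSSM);
 *     }
 * @endcode
 */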
2950
2951
2952/**
2953 * Loads the state of a timer from a saved state.
2954 *
2955 * @returns VBox status code.
2956 * @param pVM The cross context VM structure.
2957 * @param hTimer Handle of Timer to restore.
2958 * @param pSSM Save State Manager handle.
2959 */
2960VMMR3DECL(int) TMR3TimerLoad(PVM pVM, TMTIMERHANDLE hTimer, PSSMHANDLE pSSM)
2961{
2962 VM_ASSERT_EMT(pVM);
2963 TMTIMER_HANDLE_TO_VARS_RETURN(pVM, hTimer); /* => pTimer, pQueueCC, pQueue, idxTimer, idxQueue */
2964 Assert(pSSM);
2965 LogFlow(("TMR3TimerLoad: %p:{enmState=%s, .szName='%s'} pSSM=%p\n", pTimer, tmTimerState(pTimer->enmState), pTimer->szName, pSSM));
2966
2967 /*
2968 * Load the state and validate it.
2969 */
2970 uint8_t u8State;
2971 int rc = SSMR3GetU8(pSSM, &u8State);
2972 if (RT_FAILURE(rc))
2973 return rc;
2974
2975 /* TMTIMERSTATE_SAVED_XXX: Workaround for accidental state shift in r47786 (2009-05-26 19:12:12). */
2976 if ( u8State == TMTIMERSTATE_SAVED_PENDING_STOP + 1
2977 || u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE + 1)
2978 u8State--;
2979
2980 if ( u8State != TMTIMERSTATE_SAVED_PENDING_STOP
2981 && u8State != TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2982 {
2983 AssertLogRelMsgFailed(("u8State=%d\n", u8State));
2984 return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
2985 }
2986
2987 /* Enter the critical sections to make TMTimerSet/Stop happy. */
2988 if (pQueue->enmClock == TMCLOCK_VIRTUAL_SYNC)
2989 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED);
2990 PPDMCRITSECT pCritSect = pTimer->pCritSect;
2991 if (pCritSect)
2992 PDMCritSectEnter(pVM, pCritSect, VERR_IGNORED);
2993
2994 if (u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
2995 {
2996 /*
2997 * Load the expire time.
2998 */
2999 uint64_t u64Expire;
3000 rc = SSMR3GetU64(pSSM, &u64Expire);
3001 if (RT_FAILURE(rc))
3002 return rc;
3003
3004 /*
3005 * Set it.
3006 */
3007 Log(("u8State=%d u64Expire=%llu\n", u8State, u64Expire));
3008 rc = TMTimerSet(pVM, hTimer, u64Expire);
3009 }
3010 else
3011 {
3012 /*
3013 * Stop it.
3014 */
3015 Log(("u8State=%d\n", u8State));
3016 rc = TMTimerStop(pVM, hTimer);
3017 }
3018
3019 if (pCritSect)
3020 PDMCritSectLeave(pVM, pCritSect);
3021 if (pQueue->enmClock == TMCLOCK_VIRTUAL_SYNC)
3022 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
3023
3024 /*
3025 * On failure set SSM status.
3026 */
3027 if (RT_FAILURE(rc))
3028 rc = SSMR3HandleSetStatus(pSSM, rc);
3029 return rc;
3030}
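
/*
 * The load-side counterpart to the save sketch above, again with
 * hypothetical device names; version and pass handling is elided:
 *
 * @code
 *     static DECLCALLBACK(int) devFooLoadExec(PPDMDEVINS pDevIns, PSSMHANDLE pSSM,
 *                                             uint32_t uVersion, uint32_t uPass)
 *     {
 *         PDEVFOO pThis = PDMDEVINS_2_DATA(pDevIns, PDEVFOO);  // hypothetical state
 *         // ... load the device registers first ...
 *         // Restores the timer to stopped or active-at-saved-expire state:
 *         return TMR3TimerLoad(PDMDevHlpGetVM(pDevIns), pThis->hTimer, pSSM);
 *     }
 * @endcode
 */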
3031
3032
3033/**
3034 * Skips the state of a timer in a given saved state.
3035 *
3036 * @returns VBox status code.
3037 * @param pSSM Save State Manager handle.
3038 * @param pfActive Where to store whether the timer was active
3039 * when the state was saved.
3040 */
3041VMMR3DECL(int) TMR3TimerSkip(PSSMHANDLE pSSM, bool *pfActive)
3042{
3043 Assert(pSSM); AssertPtr(pfActive);
3044 LogFlow(("TMR3TimerSkip: pSSM=%p pfActive=%p\n", pSSM, pfActive));
3045
3046 /*
3047 * Load the state and validate it.
3048 */
3049 uint8_t u8State;
3050 int rc = SSMR3GetU8(pSSM, &u8State);
3051 if (RT_FAILURE(rc))
3052 return rc;
3053
3054 /* TMTIMERSTATE_SAVED_XXX: Workaround for accidental state shift in r47786 (2009-05-26 19:12:12). */
3055 if ( u8State == TMTIMERSTATE_SAVED_PENDING_STOP + 1
3056 || u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE + 1)
3057 u8State--;
3058
3059 if ( u8State != TMTIMERSTATE_SAVED_PENDING_STOP
3060 && u8State != TMTIMERSTATE_SAVED_PENDING_SCHEDULE)
3061 {
3062 AssertLogRelMsgFailed(("u8State=%d\n", u8State));
3063 return SSMR3HandleSetStatus(pSSM, VERR_TM_LOAD_STATE);
3064 }
3065
3066 *pfActive = (u8State == TMTIMERSTATE_SAVED_PENDING_SCHEDULE);
3067 if (*pfActive)
3068 {
3069 /*
3070 * Load the expire time.
3071 */
3072 uint64_t u64Expire;
3073 rc = SSMR3GetU64(pSSM, &u64Expire);
3074 }
3075
3076 return rc;
3077}
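
/*
 * A short sketch of when skipping is useful: an older saved-state version
 * stored a timer the current device no longer creates, so the data must be
 * consumed without restoring anything (a hypothetical load-exec fragment):
 *
 * @code
 *     bool fWasActive;
 *     int rc = TMR3TimerSkip(pSSM, &fWasActive);  // eats the state byte + expire time
 *     AssertRCReturn(rc, rc);
 * @endcode
 */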
3078
3079
3080/**
3081 * Associates a critical section with a timer.
3082 *
3083 * The critical section will be entered prior to doing the timer call back, thus
3084 * avoiding potential races between the timer thread and other threads trying to
3085 * stop or adjust the timer expiration while it's being delivered. The timer
3086 * thread will leave the critical section when the timer callback returns.
3087 *
3088 * In strict builds, ownership of the critical section will be asserted by
3089 * TMTimerSet, TMTimerStop, TMTimerGetExpire and TMTimerDestroy (when called at
3090 * runtime).
3091 *
3092 * @retval VINF_SUCCESS on success.
3093 * @retval VERR_INVALID_HANDLE if the timer handle is NULL or invalid
3094 * (asserted).
3095 * @retval VERR_INVALID_PARAMETER if pCritSect is NULL or has an invalid magic
3096 * (asserted).
3097 * @retval VERR_ALREADY_EXISTS if a critical section was already associated
3098 * with the timer (asserted).
3099 * @retval VERR_INVALID_STATE if the timer isn't stopped.
3100 *
3101 * @param pVM The cross context VM structure.
3102 * @param hTimer The timer handle.
3103 * @param pCritSect The critical section. The caller must make sure this
3104 * is around for the life time of the timer.
3105 *
3106 * @thread Any, but the caller is responsible for making sure the timer is not
3107 * active.
3108 */
3109VMMR3DECL(int) TMR3TimerSetCritSect(PVM pVM, TMTIMERHANDLE hTimer, PPDMCRITSECT pCritSect)
3110{
3111 TMTIMER_HANDLE_TO_VARS_RETURN(pVM, hTimer); /* => pTimer, pQueueCC, pQueue, idxTimer, idxQueue */
3112 AssertPtrReturn(pCritSect, VERR_INVALID_PARAMETER);
3113 const char *pszName = PDMR3CritSectName(pCritSect); /* exploited for validation */
3114 AssertReturn(pszName, VERR_INVALID_PARAMETER);
3115 AssertReturn(!pTimer->pCritSect, VERR_ALREADY_EXISTS);
3116 AssertReturn(pTimer->enmState == TMTIMERSTATE_STOPPED, VERR_INVALID_STATE);
3117 LogFlow(("pTimer=%p (%s) pCritSect=%p (%s)\n", pTimer, pTimer->szName, pCritSect, pszName));
3118
3119 pTimer->pCritSect = pCritSect;
3120 return VINF_SUCCESS;
3121}
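
/*
 * A minimal sketch of the typical pattern: associate the device's own
 * critical section right after creating the timer (while it is still
 * stopped), so the callback and the device code never race. The handle and
 * section names are hypothetical:
 *
 * @code
 *     rc = TMR3TimerSetCritSect(pVM, pThis->hTimer, &pThis->CritSect);
 *     AssertRCReturn(rc, rc);
 * @endcode
 */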
3122
3123
3124/**
3125 * Get the real world UTC time adjusted for VM lag.
3126 *
3127 * @returns pTime.
3128 * @param pVM The cross context VM structure.
3129 * @param pTime Where to store the time.
3130 */
3131VMMR3_INT_DECL(PRTTIMESPEC) TMR3UtcNow(PVM pVM, PRTTIMESPEC pTime)
3132{
3133 /*
3134 * Get a stable set of VirtualSync parameters and calc the lag.
3135 */
3136 uint64_t offVirtualSync;
3137 uint64_t offVirtualSyncGivenUp;
3138 do
3139 {
3140 offVirtualSync = ASMAtomicReadU64(&pVM->tm.s.offVirtualSync);
3141 offVirtualSyncGivenUp = ASMAtomicReadU64((uint64_t volatile *)&pVM->tm.s.offVirtualSyncGivenUp);
3142 } while (ASMAtomicReadU64(&pVM->tm.s.offVirtualSync) != offVirtualSync);
3143
3144 Assert(offVirtualSync >= offVirtualSyncGivenUp);
3145 uint64_t const offLag = offVirtualSync - offVirtualSyncGivenUp;
3146
3147 /*
3148 * Get the current time, adjust for virtual sync lag, and apply the time displacement.
3149 */
3150 RTTimeNow(pTime);
3151 RTTimeSpecSubNano(pTime, offLag);
3152 RTTimeSpecAddNano(pTime, pVM->tm.s.offUTC);
3153
3154 /*
3155 * Log details if the time changed radically (also triggers on first call).
3156 */
3157 int64_t nsPrev = ASMAtomicXchgS64(&pVM->tm.s.nsLastUtcNow, RTTimeSpecGetNano(pTime));
3158 int64_t cNsDelta = RTTimeSpecGetNano(pTime) - nsPrev;
3159 if ((uint64_t)RT_ABS(cNsDelta) > RT_NS_1HOUR / 2)
3160 {
3161 RTTIMESPEC NowAgain;
3162 RTTimeNow(&NowAgain);
3163 LogRel(("TMR3UtcNow: nsNow=%'RI64 nsPrev=%'RI64 -> cNsDelta=%'RI64 (offLag=%'RI64 offVirtualSync=%'RU64 offVirtualSyncGivenUp=%'RU64, NowAgain=%'RI64)\n",
3164 RTTimeSpecGetNano(pTime), nsPrev, cNsDelta, offLag, offVirtualSync, offVirtualSyncGivenUp, RTTimeSpecGetNano(&NowAgain)));
3165 if (pVM->tm.s.pszUtcTouchFileOnJump && nsPrev != 0)
3166 {
3167 RTFILE hFile;
3168 int rc = RTFileOpen(&hFile, pVM->tm.s.pszUtcTouchFileOnJump,
3169 RTFILE_O_WRITE | RTFILE_O_APPEND | RTFILE_O_OPEN_CREATE | RTFILE_O_DENY_NONE);
3170 if (RT_SUCCESS(rc))
3171 {
3172 char szMsg[256];
3173 size_t cch;
3174 cch = RTStrPrintf(szMsg, sizeof(szMsg),
3175 "TMR3UtcNow: nsNow=%'RI64 nsPrev=%'RI64 -> cNsDelta=%'RI64 (offLag=%'RI64 offVirtualSync=%'RU64 offVirtualSyncGivenUp=%'RU64, NowAgain=%'RI64)\n",
3176 RTTimeSpecGetNano(pTime), nsPrev, cNsDelta, offLag, offVirtualSync, offVirtualSyncGivenUp, RTTimeSpecGetNano(&NowAgain));
3177 RTFileWrite(hFile, szMsg, cch, NULL);
3178 RTFileClose(hFile);
3179 }
3180 }
3181 }
3182
3183 return pTime;
3184}
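
/*
 * A minimal usage sketch: read the lag-adjusted guest UTC time, e.g. for an
 * RTC-style device about to latch the wall clock:
 *
 * @code
 *     RTTIMESPEC Ts;
 *     TMR3UtcNow(pVM, &Ts);
 *     int64_t const cNsGuestUtc = RTTimeSpecGetNano(&Ts); // ns since the Unix epoch
 * @endcode
 */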
3185
3186
3187/**
3188 * Pauses all clocks except TMCLOCK_REAL.
3189 *
3190 * @returns VBox status code, all errors are asserted.
3191 * @param pVM The cross context VM structure.
3192 * @param pVCpu The cross context virtual CPU structure.
3193 * @thread EMT corresponding to the given virtual CPU.
3194 */
3195VMMR3DECL(int) TMR3NotifySuspend(PVM pVM, PVMCPU pVCpu)
3196{
3197 VMCPU_ASSERT_EMT(pVCpu);
3198 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED); /* Paranoia: Exploiting the virtual sync lock here. */
3199
3200 /*
3201 * The shared virtual clock (includes virtual sync which is tied to it).
3202 */
3203 int rc = tmVirtualPauseLocked(pVM);
3204 AssertRCReturnStmt(rc, PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock), rc);
3205
3206 /*
3207 * Pause the TSC last since it is normally linked to the virtual
3208 * sync clock, so the above code may actually stop both clocks.
3209 */
3210 if (!pVM->tm.s.fTSCTiedToExecution)
3211 {
3212 rc = tmCpuTickPauseLocked(pVM, pVCpu);
3213 AssertRCReturnStmt(rc, PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock), rc);
3214 }
3215
3216#ifndef VBOX_WITHOUT_NS_ACCOUNTING
3217 /*
3218 * Update cNsTotal and stats.
3219 */
3220 Assert(!pVCpu->tm.s.fSuspended);
3221 uint64_t const cNsTotalNew = RTTimeNanoTS() - pVCpu->tm.s.nsStartTotal;
3222 uint64_t const cNsOtherNew = cNsTotalNew - pVCpu->tm.s.cNsExecuting - pVCpu->tm.s.cNsHalted;
3223
3224# if defined(VBOX_WITH_STATISTICS) || defined(VBOX_WITH_NS_ACCOUNTING_STATS)
3225 STAM_REL_COUNTER_ADD(&pVCpu->tm.s.StatNsTotal, cNsTotalNew - pVCpu->tm.s.cNsTotalStat);
3226 int64_t const cNsOtherNewDelta = cNsOtherNew - pVCpu->tm.s.cNsOtherStat;
3227 if (cNsOtherNewDelta > 0)
3228 STAM_REL_COUNTER_ADD(&pVCpu->tm.s.StatNsOther, (uint64_t)cNsOtherNewDelta);
3229# endif
3230
3231 uint32_t uGen = ASMAtomicIncU32(&pVCpu->tm.s.uTimesGen); Assert(uGen & 1);
3232 pVCpu->tm.s.nsStartTotal = cNsTotalNew;
3233 pVCpu->tm.s.fSuspended = true;
3234 pVCpu->tm.s.cNsTotalStat = cNsTotalNew;
3235 pVCpu->tm.s.cNsOtherStat = cNsOtherNew;
3236 ASMAtomicWriteU32(&pVCpu->tm.s.uTimesGen, (uGen | 1) + 1);
3237#endif
3238
3239 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
3240 return VINF_SUCCESS;
3241}
3242
3243
3244/**
3245 * Resumes all clocks except TMCLOCK_REAL.
3246 *
3247 * @returns VBox status code, all errors are asserted.
3248 * @param pVM The cross context VM structure.
3249 * @param pVCpu The cross context virtual CPU structure.
3250 * @thread EMT corresponding to the given virtual CPU.
3251 */
3252VMMR3DECL(int) TMR3NotifyResume(PVM pVM, PVMCPU pVCpu)
3253{
3254 VMCPU_ASSERT_EMT(pVCpu);
3255 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED); /* Paranoia: Exploiting the virtual sync lock here. */
3256
3257#ifndef VBOX_WITHOUT_NS_ACCOUNTING
3258 /*
3259 * Set nsStartTotal. There is no need to back this out if either of
3260 * the two calls below fails.
3261 */
3262 uint32_t uGen = ASMAtomicIncU32(&pVCpu->tm.s.uTimesGen); Assert(uGen & 1);
3263 pVCpu->tm.s.nsStartTotal = RTTimeNanoTS() - pVCpu->tm.s.nsStartTotal;
3264 pVCpu->tm.s.fSuspended = false;
3265 ASMAtomicWriteU32(&pVCpu->tm.s.uTimesGen, (uGen | 1) + 1);
3266#endif
3267
3268 /*
3269 * Resume the TSC first since it is normally linked to the virtual sync
3270 * clock, so it may actually not be resumed until we've executed the code
3271 * below.
3272 */
3273 if (!pVM->tm.s.fTSCTiedToExecution)
3274 {
3275 int rc = tmCpuTickResumeLocked(pVM, pVCpu);
3276 AssertRCReturnStmt(rc, PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock), rc);
3277 }
3278
3279 /*
3280 * The shared virtual clock (includes virtual sync which is tied to it).
3281 */
3282 int rc = tmVirtualResumeLocked(pVM);
3283
3284 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
3285 return rc;
3286}
3287
3288
3289/**
3290 * Sets the warp drive percent of the virtual time.
3291 *
3292 * @returns VBox status code.
3293 * @param pUVM The user mode VM structure.
3294 * @param u32Percent The new percentage. 100 means normal operation.
3295 */
3296VMMDECL(int) TMR3SetWarpDrive(PUVM pUVM, uint32_t u32Percent)
3297{
3298 return VMR3ReqPriorityCallWaitU(pUVM, VMCPUID_ANY, (PFNRT)tmR3SetWarpDrive, 2, pUVM, u32Percent);
3299}
3300
3301
3302/**
3303 * EMT worker for TMR3SetWarpDrive.
3304 *
3305 * @returns VBox status code.
3306 * @param pUVM The user mode VM handle.
3307 * @param u32Percent See TMR3SetWarpDrive().
3308 * @internal
3309 */
3310static DECLCALLBACK(int) tmR3SetWarpDrive(PUVM pUVM, uint32_t u32Percent)
3311{
3312 PVM pVM = pUVM->pVM;
3313 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
3314 PVMCPU pVCpu = VMMGetCpu(pVM);
3315
3316 /*
3317 * Validate it.
3318 */
3319 AssertMsgReturn(u32Percent >= 2 && u32Percent <= 20000,
3320 ("%RX32 is not between 2 and 20000 (inclusive).\n", u32Percent),
3321 VERR_INVALID_PARAMETER);
3322
3323/** @todo This isn't a feature specific to virtual time, move the variables to
3324 * TM level and make it affect TMR3UtcNow as well! */
3325
3326 PDMCritSectEnter(pVM, &pVM->tm.s.VirtualSyncLock, VERR_IGNORED); /* Paranoia: Exploiting the virtual sync lock here. */
3327
3328 /*
3329 * If the time is running we'll have to pause it before we can change
3330 * the warp drive settings.
3331 */
3332 bool fPaused = !!pVM->tm.s.cVirtualTicking;
3333 if (fPaused) /** @todo this isn't really working, but wtf. */
3334 TMR3NotifySuspend(pVM, pVCpu);
3335
3336 /** @todo Should switch TM mode to virt-tsc-emulated if it isn't already! */
3337 pVM->tm.s.u32VirtualWarpDrivePercentage = u32Percent;
3338 pVM->tm.s.fVirtualWarpDrive = u32Percent != 100;
3339 LogRel(("TM: u32VirtualWarpDrivePercentage=%RI32 fVirtualWarpDrive=%RTbool\n",
3340 pVM->tm.s.u32VirtualWarpDrivePercentage, pVM->tm.s.fVirtualWarpDrive));
3341
3342 if (fPaused)
3343 TMR3NotifyResume(pVM, pVCpu);
3344
3345 PDMCritSectLeave(pVM, &pVM->tm.s.VirtualSyncLock);
3346 return VINF_SUCCESS;
3347}
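
/*
 * A minimal usage sketch: temporarily run the guest's virtual clocks at
 * double speed and then restore normal operation (the accepted range is
 * 2..20000 percent, as validated above):
 *
 * @code
 *     int rc = TMR3SetWarpDrive(pUVM, 200);   // 200% -> guest time runs 2x
 *     AssertRC(rc);
 *     // ... let the guest fast-forward ...
 *     rc = TMR3SetWarpDrive(pUVM, 100);       // back to normal speed
 * @endcode
 */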
3348
3349
3350/**
3351 * Gets the current TMCLOCK_VIRTUAL time without checking
3352 * timers or anything.
3353 *
3354 * @returns The timestamp.
3355 * @param pUVM The user mode VM structure.
3356 *
3357 * @remarks See TMVirtualGetNoCheck.
3358 */
3359VMMR3DECL(uint64_t) TMR3TimeVirtGet(PUVM pUVM)
3360{
3361 UVM_ASSERT_VALID_EXT_RETURN(pUVM, UINT64_MAX);
3362 PVM pVM = pUVM->pVM;
3363 VM_ASSERT_VALID_EXT_RETURN(pVM, UINT64_MAX);
3364 return TMVirtualGetNoCheck(pVM);
3365}
3366
3367
3368/**
3369 * Gets the current TMCLOCK_VIRTUAL time in milliseconds without checking
3370 * timers or anything.
3371 *
3372 * @returns The timestamp in milliseconds.
3373 * @param pUVM The user mode VM structure.
3374 *
3375 * @remarks See TMVirtualGetNoCheck.
3376 */
3377VMMR3DECL(uint64_t) TMR3TimeVirtGetMilli(PUVM pUVM)
3378{
3379 UVM_ASSERT_VALID_EXT_RETURN(pUVM, UINT64_MAX);
3380 PVM pVM = pUVM->pVM;
3381 VM_ASSERT_VALID_EXT_RETURN(pVM, UINT64_MAX);
3382 return TMVirtualToMilli(pVM, TMVirtualGetNoCheck(pVM));
3383}
3384
3385
3386/**
3387 * Gets the current TMCLOCK_VIRTUAL time in microseconds without checking
3388 * timers or anything.
3389 *
3390 * @returns The timestamp in microseconds.
3391 * @param pUVM The user mode VM structure.
3392 *
3393 * @remarks See TMVirtualGetNoCheck.
3394 */
3395VMMR3DECL(uint64_t) TMR3TimeVirtGetMicro(PUVM pUVM)
3396{
3397 UVM_ASSERT_VALID_EXT_RETURN(pUVM, UINT64_MAX);
3398 PVM pVM = pUVM->pVM;
3399 VM_ASSERT_VALID_EXT_RETURN(pVM, UINT64_MAX);
3400 return TMVirtualToMicro(pVM, TMVirtualGetNoCheck(pVM));
3401}
3402
3403
3404/**
3405 * Gets the current TMCLOCK_VIRTUAL time in nanoseconds without checking
3406 * timers or anything.
3407 *
3408 * @returns The timestamp in nanoseconds.
3409 * @param pUVM The user mode VM structure.
3410 *
3411 * @remarks See TMVirtualGetNoCheck.
3412 */
3413VMMR3DECL(uint64_t) TMR3TimeVirtGetNano(PUVM pUVM)
3414{
3415 UVM_ASSERT_VALID_EXT_RETURN(pUVM, UINT64_MAX);
3416 PVM pVM = pUVM->pVM;
3417 VM_ASSERT_VALID_EXT_RETURN(pVM, UINT64_MAX);
3418 return TMVirtualToNano(pVM, TMVirtualGetNoCheck(pVM));
3419}
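
/*
 * A small sketch relating the getters above; all sample the same
 * TMCLOCK_VIRTUAL clock, just in different units (these are separate reads,
 * so the values may differ by a tick or two):
 *
 * @code
 *     uint64_t const cNs = TMR3TimeVirtGetNano(pUVM);   // nanoseconds
 *     uint64_t const cUs = TMR3TimeVirtGetMicro(pUVM);  // ~ cNs / 1000
 *     uint64_t const cMs = TMR3TimeVirtGetMilli(pUVM);  // ~ cNs / 1000000
 * @endcode
 */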
3420
3421
3422/**
3423 * Gets the current warp drive percent.
3424 *
3425 * @returns The warp drive percent.
3426 * @param pUVM The user mode VM structure.
3427 */
3428VMMR3DECL(uint32_t) TMR3GetWarpDrive(PUVM pUVM)
3429{
3430 UVM_ASSERT_VALID_EXT_RETURN(pUVM, UINT32_MAX);
3431 PVM pVM = pUVM->pVM;
3432 VM_ASSERT_VALID_EXT_RETURN(pVM, UINT32_MAX);
3433 return pVM->tm.s.u32VirtualWarpDrivePercentage;
3434}
3435
3436
3437#if 0 /* unused - needs a little updating after @bugref{9941} */
3438/**
3439 * Gets the performance information for one virtual CPU as seen by the VMM.
3440 *
3441 * The returned times cover the period where the VM is running and will be
3442 * reset when restoring a previous VM state (at least for the time being).
3443 *
3444 * @retval VINF_SUCCESS on success.
3445 * @retval VERR_NOT_IMPLEMENTED if not compiled in.
3446 * @retval VERR_INVALID_STATE if the VM handle is bad.
3447 * @retval VERR_INVALID_CPU_ID if idCpu is out of range.
3448 *
3449 * @param pVM The cross context VM structure.
3450 * @param idCpu The ID of the virtual CPU which times to get.
3451 * @param pcNsTotal Where to store the total run time (nano seconds) of
3452 * the CPU, i.e. the sum of the three other returns.
3453 * Optional.
3454 * @param pcNsExecuting Where to store the time (nano seconds) spent
3455 * executing guest code. Optional.
3456 * @param pcNsHalted Where to store the time (nano seconds) spent
3457 * halted. Optional
3458 * @param pcNsOther Where to store the time (nano seconds) spent
3459 * preempted by the host scheduler, on virtualization
3460 * overhead and on other tasks.
3461 */
3462VMMR3DECL(int) TMR3GetCpuLoadTimes(PVM pVM, VMCPUID idCpu, uint64_t *pcNsTotal, uint64_t *pcNsExecuting,
3463 uint64_t *pcNsHalted, uint64_t *pcNsOther)
3464{
3465 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_STATE);
3466 AssertReturn(idCpu < pVM->cCpus, VERR_INVALID_CPU_ID);
3467
3468#ifndef VBOX_WITHOUT_NS_ACCOUNTING
3469 /*
3470 * Get a stable result set.
3471 * This should be way quicker than an EMT request.
3472 */
3473 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
3474 uint32_t uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
3475 uint64_t cNsTotal = pVCpu->tm.s.cNsTotal;
3476 uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
3477 uint64_t cNsHalted = pVCpu->tm.s.cNsHalted;
3478 uint64_t cNsOther = pVCpu->tm.s.cNsOther;
3479 while ( (uTimesGen & 1) /* update in progress */
3480 || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen))
3481 {
3482 RTThreadYield();
3483 uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
3484 cNsTotal = pVCpu->tm.s.cNsTotal;
3485 cNsExecuting = pVCpu->tm.s.cNsExecuting;
3486 cNsHalted = pVCpu->tm.s.cNsHalted;
3487 cNsOther = pVCpu->tm.s.cNsOther;
3488 }
3489
3490 /*
3491 * Fill in the return values.
3492 */
3493 if (pcNsTotal)
3494 *pcNsTotal = cNsTotal;
3495 if (pcNsExecuting)
3496 *pcNsExecuting = cNsExecuting;
3497 if (pcNsHalted)
3498 *pcNsHalted = cNsHalted;
3499 if (pcNsOther)
3500 *pcNsOther = cNsOther;
3501
3502 return VINF_SUCCESS;
3503
3504#else
3505 return VERR_NOT_IMPLEMENTED;
3506#endif
3507}
3508#endif /* unused */
3509
3510
3511/**
3512 * Gets the performance information for one virtual CPU as seen by the VMM in
3513 * percents.
3514 *
3515 * The returned times cover the period where the VM is running and will be
3516 * reset when restoring a previous VM state (at least for the time being).
3517 *
3518 * @retval VINF_SUCCESS on success.
3519 * @retval VERR_NOT_IMPLEMENTED if not compiled in.
3520 * @retval VERR_INVALID_VM_HANDLE if the VM handle is bad.
3521 * @retval VERR_INVALID_CPU_ID if idCpu is out of range.
3522 *
3523 * @param pUVM The usermode VM structure.
3524 * @param idCpu The ID of the virtual CPU which times to get.
3525 * @param pcMsInterval Where to store the interval of the percentages in
3526 * milliseconds. Optional.
3527 * @param pcPctExecuting Where to return the percentage of time spent
3528 * executing guest code. Optional.
3529 * @param pcPctHalted Where to return the percentage of time spent halted.
3530 * Optional
3531 * @param pcPctOther Where to return the percentage of time spent
3532 * preempted by the host scheduler, on virtualization
3533 * overhead and on other tasks.
3534 */
3535VMMR3DECL(int) TMR3GetCpuLoadPercents(PUVM pUVM, VMCPUID idCpu, uint64_t *pcMsInterval, uint8_t *pcPctExecuting,
3536 uint8_t *pcPctHalted, uint8_t *pcPctOther)
3537{
3538 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
3539 PVM pVM = pUVM->pVM;
3540 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
3541 AssertReturn(idCpu == VMCPUID_ALL || idCpu < pVM->cCpus, VERR_INVALID_CPU_ID);
3542
3543#ifndef VBOX_WITHOUT_NS_ACCOUNTING
3544 TMCPULOADSTATE volatile *pState;
3545 if (idCpu == VMCPUID_ALL)
3546 pState = &pVM->tm.s.CpuLoad;
3547 else
3548 pState = &pVM->apCpusR3[idCpu]->tm.s.CpuLoad;
3549
3550 if (pcMsInterval)
3551 *pcMsInterval = RT_MS_1SEC;
3552 if (pcPctExecuting)
3553 *pcPctExecuting = pState->cPctExecuting;
3554 if (pcPctHalted)
3555 *pcPctHalted = pState->cPctHalted;
3556 if (pcPctOther)
3557 *pcPctOther = pState->cPctOther;
3558
3559 return VINF_SUCCESS;
3560
3561#else
3562 RT_NOREF(pcMsInterval, pcPctExecuting, pcPctHalted, pcPctOther);
3563 return VERR_NOT_IMPLEMENTED;
3564#endif
3565}
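
/*
 * A minimal polling sketch, e.g. for a frontend CPU meter sampling the
 * aggregated load once a second:
 *
 * @code
 *     uint64_t cMsInterval;
 *     uint8_t  cPctExec, cPctHalted, cPctOther;
 *     int rc = TMR3GetCpuLoadPercents(pUVM, VMCPUID_ALL, &cMsInterval,
 *                                     &cPctExec, &cPctHalted, &cPctOther);
 *     if (RT_SUCCESS(rc))
 *         LogRel(("load: exec=%u%% halted=%u%% other=%u%% (interval %RU64 ms)\n",
 *                 cPctExec, cPctHalted, cPctOther, cMsInterval));
 * @endcode
 */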
3566
3567#ifndef VBOX_WITHOUT_NS_ACCOUNTING
3568
3569/**
3570 * Helper for tmR3CpuLoadTimer.
3571 *
3572 * @param pState The state to update.
3573 * @param cNsTotal Total time.
3574 * @param cNsExecuting Time executing.
3575 * @param cNsHalted Time halted.
3576 */
3577DECLINLINE(void) tmR3CpuLoadTimerMakeUpdate(PTMCPULOADSTATE pState, uint64_t cNsTotal, uint64_t cNsExecuting, uint64_t cNsHalted)
3578{
3579 /* Calc & update deltas */
3580 uint64_t cNsTotalDelta = cNsTotal - pState->cNsPrevTotal;
3581 uint64_t cNsExecutingDelta = cNsExecuting - pState->cNsPrevExecuting;
3582 uint64_t cNsHaltedDelta = cNsHalted - pState->cNsPrevHalted;
3583
3584 if (cNsExecutingDelta + cNsHaltedDelta <= cNsTotalDelta)
3585 { /* likely */ }
3586 else
3587 {
3588 /* Just adjust the executing and halted values down to match the total delta. */
3589 uint64_t const cNsExecAndHalted = cNsExecutingDelta + cNsHaltedDelta;
3590 uint64_t const cNsAdjust = cNsExecAndHalted - cNsTotalDelta + cNsTotalDelta / 64;
3591 cNsExecutingDelta -= (cNsAdjust * cNsExecutingDelta + cNsExecAndHalted - 1) / cNsExecAndHalted;
3592 cNsHaltedDelta -= (cNsAdjust * cNsHaltedDelta + cNsExecAndHalted - 1) / cNsExecAndHalted;
3593 /*Assert(cNsExecutingDelta + cNsHaltedDelta <= cNsTotalDelta); - annoying when debugging */
3594 }
3595
3596 pState->cNsPrevExecuting = cNsExecuting;
3597 pState->cNsPrevHalted = cNsHalted;
3598 pState->cNsPrevTotal = cNsTotal;
3599
3600 /* Calc pcts. */
3601 uint8_t cPctExecuting, cPctHalted, cPctOther;
3602 if (!cNsTotalDelta)
3603 {
3604 cPctExecuting = 0;
3605 cPctHalted = 100;
3606 cPctOther = 0;
3607 }
3608 else if (cNsTotalDelta < UINT64_MAX / 4)
3609 {
3610 cPctExecuting = (uint8_t)(cNsExecutingDelta * 100 / cNsTotalDelta);
3611 cPctHalted = (uint8_t)(cNsHaltedDelta * 100 / cNsTotalDelta);
3612 cPctOther = (uint8_t)((cNsTotalDelta - cNsExecutingDelta - cNsHaltedDelta) * 100 / cNsTotalDelta);
3613 }
3614 else
3615 {
3616 cPctExecuting = 0;
3617 cPctHalted = 100;
3618 cPctOther = 0;
3619 }
3620
3621 /* Update percentages: */
3622 size_t idxHistory = pState->idxHistory + 1;
3623 if (idxHistory >= RT_ELEMENTS(pState->aHistory))
3624 idxHistory = 0;
3625
3626 pState->cPctExecuting = cPctExecuting;
3627 pState->cPctHalted = cPctHalted;
3628 pState->cPctOther = cPctOther;
3629
3630 pState->aHistory[idxHistory].cPctExecuting = cPctExecuting;
3631 pState->aHistory[idxHistory].cPctHalted = cPctHalted;
3632 pState->aHistory[idxHistory].cPctOther = cPctOther;
3633
3634 pState->idxHistory = (uint16_t)idxHistory;
3635 if (pState->cHistoryEntries < RT_ELEMENTS(pState->aHistory))
3636 pState->cHistoryEntries++;
3637}
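
/*
 * A worked example of the percentage calculation above, with illustrative
 * numbers: over a 1 second interval with 600 ms spent executing and 300 ms
 * halted, the deltas yield:
 *
 * @code
 *     // cNsTotalDelta     = 1000000000
 *     // cNsExecutingDelta =  600000000
 *     // cNsHaltedDelta    =  300000000
 *     // => cPctExecuting = 60, cPctHalted = 30, cPctOther = 10
 * @endcode
 */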
3638
3639
3640/**
3641 * @callback_method_impl{FNTMTIMERINT,
3642 * Timer callback that calculates the CPU load since the last
3643 * time it was called.}
3644 */
3645static DECLCALLBACK(void) tmR3CpuLoadTimer(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser)
3646{
3647 /*
3648 * Re-arm the timer first.
3649 */
3650 int rc = TMTimerSetMillies(pVM, hTimer, 1000);
3651 AssertLogRelRC(rc);
3652 NOREF(pvUser);
3653
3654 /*
3655 * Update the values for each CPU.
3656 */
3657 uint64_t cNsTotalAll = 0;
3658 uint64_t cNsExecutingAll = 0;
3659 uint64_t cNsHaltedAll = 0;
3660 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
3661 {
3662 PVMCPU pVCpu = pVM->apCpusR3[iCpu];
3663
3664 /* Try get a stable data set. */
3665 uint32_t cTries = 3;
3666 uint64_t nsNow = RTTimeNanoTS();
3667 uint32_t uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
3668 bool fSuspended = pVCpu->tm.s.fSuspended;
3669 uint64_t nsStartTotal = pVCpu->tm.s.nsStartTotal;
3670 uint64_t cNsExecuting = pVCpu->tm.s.cNsExecuting;
3671 uint64_t cNsHalted = pVCpu->tm.s.cNsHalted;
3672 while (RT_UNLIKELY( (uTimesGen & 1) /* update in progress */
3673 || uTimesGen != ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen)))
3674 {
3675 if (!--cTries)
3676 break;
3677 ASMNopPause();
3678 nsNow = RTTimeNanoTS();
3679 uTimesGen = ASMAtomicReadU32(&pVCpu->tm.s.uTimesGen);
3680 fSuspended = pVCpu->tm.s.fSuspended;
3681 nsStartTotal = pVCpu->tm.s.nsStartTotal;
3682 cNsExecuting = pVCpu->tm.s.cNsExecuting;
3683 cNsHalted = pVCpu->tm.s.cNsHalted;
3684 }
3685
3686 /* Totals */
3687 uint64_t cNsTotal = fSuspended ? nsStartTotal : nsNow - nsStartTotal;
3688 cNsTotalAll += cNsTotal;
3689 cNsExecutingAll += cNsExecuting;
3690 cNsHaltedAll += cNsHalted;
3691
3692 /* Calc the PCTs and update the state. */
3693 tmR3CpuLoadTimerMakeUpdate(&pVCpu->tm.s.CpuLoad, cNsTotal, cNsExecuting, cNsHalted);
3694
3695 /* Tell the VCpu to update the other and total stat members. */
3696 ASMAtomicWriteBool(&pVCpu->tm.s.fUpdateStats, true);
3697 }
3698
3699 /*
3700 * Update the value for all the CPUs.
3701 */
3702 tmR3CpuLoadTimerMakeUpdate(&pVM->tm.s.CpuLoad, cNsTotalAll, cNsExecutingAll, cNsHaltedAll);
3703
3704}
3705
3706#endif /* !VBOX_WITHOUT_NS_ACCOUNTING */
3707
3708
3709/**
3710 * @callback_method_impl{PFNVMMEMTRENDEZVOUS,
3711 * Worker for TMR3CpuTickParavirtEnable}
3712 */
3713static DECLCALLBACK(VBOXSTRICTRC) tmR3CpuTickParavirtEnable(PVM pVM, PVMCPU pVCpuEmt, void *pvData)
3714{
3715 AssertPtr(pVM); Assert(pVM->tm.s.fTSCModeSwitchAllowed); NOREF(pVCpuEmt); NOREF(pvData);
3716 Assert(pVM->tm.s.enmTSCMode != TMTSCMODE_NATIVE_API); /** @todo figure out NEM/win and paravirt */
3717 Assert(tmR3HasFixedTSC(pVM));
3718
3719 if (pVM->tm.s.enmTSCMode != TMTSCMODE_REAL_TSC_OFFSET)
3720 {
3721 /*
3722 * The return value of TMCpuTickGet() and the guest's TSC value for each
3723 * CPU must remain constant across the TM TSC mode-switch. Thus we have
3724 * the following equation (new/old signifies the new/old tsc modes):
3725 * uNewTsc = uOldTsc
3726 *
3727 * Where (see tmCpuTickGetInternal):
3728 * uOldTsc = uRawOldTsc - offTscRawSrcOld
3729 * uNewTsc = uRawNewTsc - offTscRawSrcNew
3730 *
3731 * Solve it for offTscRawSrcNew without replacing uOldTsc:
3732 * uRawNewTsc - offTscRawSrcNew = uOldTsc
3733 * => -offTscRawSrcNew = uOldTsc - uRawNewTsc
3734 * => offTscRawSrcNew = uRawNewTsc - uOldTsc
3735 */
3736 uint64_t uRawOldTsc = tmR3CpuTickGetRawVirtualNoCheck(pVM);
3737 uint64_t uRawNewTsc = SUPReadTsc();
3738 uint32_t cCpus = pVM->cCpus;
3739 for (uint32_t i = 0; i < cCpus; i++)
3740 {
3741 PVMCPU pVCpu = pVM->apCpusR3[i];
3742 uint64_t uOldTsc = uRawOldTsc - pVCpu->tm.s.offTSCRawSrc;
3743 pVCpu->tm.s.offTSCRawSrc = uRawNewTsc - uOldTsc;
3744 Assert(uRawNewTsc - pVCpu->tm.s.offTSCRawSrc >= uOldTsc); /* paranoia^256 */
3745 }
3746
3747 LogRel(("TM: Switching TSC mode from '%s' to '%s'\n", tmR3GetTSCModeNameEx(pVM->tm.s.enmTSCMode),
3748 tmR3GetTSCModeNameEx(TMTSCMODE_REAL_TSC_OFFSET)));
3749 pVM->tm.s.enmTSCMode = TMTSCMODE_REAL_TSC_OFFSET;
3750 }
3751 return VINF_SUCCESS;
3752}
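
/*
 * A quick numeric check of the rebasing math above (illustrative values):
 * with uRawOldTsc = 5000 and an old offset of 1000, uOldTsc = 4000; if
 * uRawNewTsc = 9000, the new offset becomes 9000 - 4000 = 5000, and indeed
 * uNewTsc = uRawNewTsc - offTscRawSrcNew = 4000 = uOldTsc, so the guest
 * observes no TSC jump across the mode switch.
 */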
3753
3754
3755/**
3756 * Notify TM that the guest has enabled usage of a paravirtualized TSC.
3757 *
3758 * This may perform a EMT rendezvous and change the TSC virtualization mode.
3759 *
3760 * @returns VBox status code.
3761 * @param pVM The cross context VM structure.
3762 */
3763VMMR3_INT_DECL(int) TMR3CpuTickParavirtEnable(PVM pVM)
3764{
3765 int rc = VINF_SUCCESS;
3766 if (pVM->tm.s.fTSCModeSwitchAllowed)
3767 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ONCE, tmR3CpuTickParavirtEnable, NULL);
3768 else
3769 LogRel(("TM: Host/VM is not suitable for using TSC mode '%s', request to change TSC mode ignored\n",
3770 tmR3GetTSCModeNameEx(TMTSCMODE_REAL_TSC_OFFSET)));
3771 pVM->tm.s.fParavirtTscEnabled = true;
3772 return rc;
3773}
3774
3775
3776/**
3777 * @callback_method_impl{PFNVMMEMTRENDEZVOUS,
3778 * Worker for TMR3CpuTickParavirtDisable}
3779 */
3780static DECLCALLBACK(VBOXSTRICTRC) tmR3CpuTickParavirtDisable(PVM pVM, PVMCPU pVCpuEmt, void *pvData)
3781{
3782 AssertPtr(pVM); Assert(pVM->tm.s.fTSCModeSwitchAllowed); NOREF(pVCpuEmt);
3783 RT_NOREF1(pvData);
3784
3785 if ( pVM->tm.s.enmTSCMode == TMTSCMODE_REAL_TSC_OFFSET
3786 && pVM->tm.s.enmTSCMode != pVM->tm.s.enmOriginalTSCMode)
3787 {
3788 /*
3789 * See tmR3CpuTickParavirtEnable for an explanation of the conversion math.
3790 */
3791 uint64_t uRawOldTsc = SUPReadTsc();
3792 uint64_t uRawNewTsc = tmR3CpuTickGetRawVirtualNoCheck(pVM);
3793 uint32_t cCpus = pVM->cCpus;
3794 for (uint32_t i = 0; i < cCpus; i++)
3795 {
3796 PVMCPU pVCpu = pVM->apCpusR3[i];
3797 uint64_t uOldTsc = uRawOldTsc - pVCpu->tm.s.offTSCRawSrc;
3798 pVCpu->tm.s.offTSCRawSrc = uRawNewTsc - uOldTsc;
3799 Assert(uRawNewTsc - pVCpu->tm.s.offTSCRawSrc >= uOldTsc); /* paranoia^256 */
3800
3801 /* Update the last-seen tick here as we haven't been updating it (as we don't
3802 need it) while in pure TSC-offsetting mode. */
3803 pVCpu->tm.s.u64TSCLastSeen = uOldTsc;
3804 }
3805
3806 LogRel(("TM: Switching TSC mode from '%s' to '%s'\n", tmR3GetTSCModeNameEx(pVM->tm.s.enmTSCMode),
3807 tmR3GetTSCModeNameEx(pVM->tm.s.enmOriginalTSCMode)));
3808 pVM->tm.s.enmTSCMode = pVM->tm.s.enmOriginalTSCMode;
3809 }
3810 return VINF_SUCCESS;
3811}
3812
3813
3814/**
3815 * Notify TM that the guest has disabled usage of a paravirtualized TSC.
3816 *
3817 * If TMR3CpuTickParavirtEnable() changed the TSC virtualization mode, this will
3818 * perform an EMT rendezvous to revert those changes.
3819 *
3820 * @returns VBox status code.
3821 * @param pVM The cross context VM structure.
3822 */
3823VMMR3_INT_DECL(int) TMR3CpuTickParavirtDisable(PVM pVM)
3824{
3825 int rc = VINF_SUCCESS;
3826 if (pVM->tm.s.fTSCModeSwitchAllowed)
3827 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ONCE, tmR3CpuTickParavirtDisable, NULL);
3828 pVM->tm.s.fParavirtTscEnabled = false;
3829 return rc;
3830}
3831
3832
3833/**
3834 * Check whether the guest can be presented a fixed rate & monotonic TSC.
3835 *
3836 * @returns true if TSC is stable, false otherwise.
3837 * @param pVM The cross context VM structure.
3838 * @param fWithParavirtEnabled Whether it must also be fixed & monotonic when
3839 * the paravirtualized TSC is enabled.
3840 *
3841 * @remarks Must be called only after TMR3InitFinalize().
3842 */
3843VMMR3_INT_DECL(bool) TMR3CpuTickIsFixedRateMonotonic(PVM pVM, bool fWithParavirtEnabled)
3844{
3845 /** @todo figure out what exactly we want here later. */
3846 NOREF(fWithParavirtEnabled);
3847 PSUPGLOBALINFOPAGE pGip;
3848 return tmR3HasFixedTSC(pVM) /* Host has fixed-rate TSC. */
3849 && ( (pGip = g_pSUPGlobalInfoPage) == NULL /* Can be NULL in driverless mode. */
3850 || (pGip->u32Mode != SUPGIPMODE_ASYNC_TSC)); /* GIP thinks it's monotonic. */
3851}
3852
3853
3854/**
3855 * Gets the 5 char clock name for the info tables.
3856 *
3857 * @returns The name.
3858 * @param enmClock The clock.
3859 */
3860DECLINLINE(const char *) tmR3Get5CharClockName(TMCLOCK enmClock)
3861{
3862 switch (enmClock)
3863 {
3864 case TMCLOCK_REAL: return "Real ";
3865 case TMCLOCK_VIRTUAL: return "Virt ";
3866 case TMCLOCK_VIRTUAL_SYNC: return "VrSy ";
3867 case TMCLOCK_TSC: return "TSC ";
3868 default: return "Bad ";
3869 }
3870}
3871
3872
3873/**
3874 * Display all timers.
3875 *
3876 * @param pVM The cross context VM structure.
3877 * @param pHlp The info helpers.
3878 * @param pszArgs Arguments, ignored.
3879 */
3880static DECLCALLBACK(void) tmR3TimerInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3881{
3882 NOREF(pszArgs);
3883 pHlp->pfnPrintf(pHlp,
3884 "Timers (pVM=%p)\n"
3885 "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
3886 pVM,
3887 sizeof(RTR3PTR) * 2, "pTimerR3 ",
3888 sizeof(int32_t) * 2, "idxNext ",
3889 sizeof(int32_t) * 2, "idxPrev ",
3890 sizeof(int32_t) * 2, "idxSched ",
3891 "Time",
3892 "Expire",
3893 "HzHint",
3894 "State");
3895 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
3896 {
3897 PTMTIMERQUEUE const pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
3898 const char * const pszClock = tmR3Get5CharClockName(pQueue->enmClock);
3899 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
3900 for (uint32_t idxTimer = 0; idxTimer < pQueue->cTimersAlloc; idxTimer++)
3901 {
3902 PTMTIMER pTimer = &pQueue->paTimers[idxTimer];
3903 TMTIMERSTATE enmState = pTimer->enmState;
3904 if (enmState < TMTIMERSTATE_DESTROY && enmState > TMTIMERSTATE_INVALID)
3905 pHlp->pfnPrintf(pHlp,
3906 "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
3907 pTimer,
3908 pTimer->idxNext,
3909 pTimer->idxPrev,
3910 pTimer->idxScheduleNext,
3911 pszClock,
3912 TMTimerGet(pVM, pTimer->hSelf),
3913 pTimer->u64Expire,
3914 pTimer->uHzHint,
3915 tmTimerState(enmState),
3916 pTimer->szName);
3917 }
3918 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
3919 }
3920}
3921
3922
3923/**
3924 * Display all active timers.
3925 *
3926 * @param pVM The cross context VM structure.
3927 * @param pHlp The info helpers.
3928 * @param pszArgs Arguments, ignored.
3929 */
3930static DECLCALLBACK(void) tmR3TimerInfoActive(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3931{
3932 NOREF(pszArgs);
3933 pHlp->pfnPrintf(pHlp,
3934 "Active Timers (pVM=%p)\n"
3935 "%.*s %.*s %.*s %.*s Clock %18s %18s %6s %-25s Description\n",
3936 pVM,
3937 sizeof(RTR3PTR) * 2, "pTimerR3 ",
3938 sizeof(int32_t) * 2, "idxNext ",
3939 sizeof(int32_t) * 2, "idxPrev ",
3940 sizeof(int32_t) * 2, "idxSched ",
3941 "Time",
3942 "Expire",
3943 "HzHint",
3944 "State");
3945 for (uint32_t idxQueue = 0; idxQueue < RT_ELEMENTS(pVM->tm.s.aTimerQueues); idxQueue++)
3946 {
3947 PTMTIMERQUEUE const pQueue = &pVM->tm.s.aTimerQueues[idxQueue];
3948 const char * const pszClock = tmR3Get5CharClockName(pQueue->enmClock);
3949 PDMCritSectRwEnterShared(pVM, &pQueue->AllocLock, VERR_IGNORED);
3950 PDMCritSectEnter(pVM, &pQueue->TimerLock, VERR_IGNORED);
3951
3952 for (PTMTIMERR3 pTimer = tmTimerQueueGetHead(pQueue, pQueue);
3953 pTimer;
3954 pTimer = tmTimerGetNext(pQueue, pTimer))
3955 {
3956 pHlp->pfnPrintf(pHlp,
3957 "%p %08RX32 %08RX32 %08RX32 %s %18RU64 %18RU64 %6RU32 %-25s %s\n",
3958 pTimer,
3959 pTimer->idxNext,
3960 pTimer->idxPrev,
3961 pTimer->idxScheduleNext,
3962 pszClock,
3963 TMTimerGet(pVM, pTimer->hSelf),
3964 pTimer->u64Expire,
3965 pTimer->uHzHint,
3966 tmTimerState(pTimer->enmState),
3967 pTimer->szName);
3968 }
3969
3970 PDMCritSectLeave(pVM, &pQueue->TimerLock);
3971 PDMCritSectRwLeaveShared(pVM, &pQueue->AllocLock);
3972 }
3973}
3974
3975
3976/**
3977 * Display all clocks.
3978 *
3979 * @param pVM The cross context VM structure.
3980 * @param pHlp The info helpers.
3981 * @param pszArgs Arguments, ignored.
3982 */
3983static DECLCALLBACK(void) tmR3InfoClocks(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3984{
3985 NOREF(pszArgs);
3986
3987 /*
3988 * Read the times first to avoid more than necessary time variation.
3989 */
3990 const uint64_t u64Virtual = TMVirtualGet(pVM);
3991 const uint64_t u64VirtualSync = TMVirtualSyncGet(pVM);
3992 const uint64_t u64Real = TMRealGet(pVM);
3993
3994 for (VMCPUID i = 0; i < pVM->cCpus; i++)
3995 {
3996 PVMCPU pVCpu = pVM->apCpusR3[i];
3997 uint64_t u64TSC = TMCpuTickGet(pVCpu);
3998
3999 /*
4000 * TSC
4001 */
4002 pHlp->pfnPrintf(pHlp,
4003 "Cpu Tick: %18RU64 (%#016RX64) %RU64Hz %s - virtualized",
4004 u64TSC, u64TSC, TMCpuTicksPerSecond(pVM),
4005 pVCpu->tm.s.fTSCTicking ? "ticking" : "paused");
4006 if (pVM->tm.s.enmTSCMode == TMTSCMODE_REAL_TSC_OFFSET)
4007 {
4008 pHlp->pfnPrintf(pHlp, " - real tsc offset");
4009 if (pVCpu->tm.s.offTSCRawSrc)
4010 pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVCpu->tm.s.offTSCRawSrc);
4011 }
4012 else if (pVM->tm.s.enmTSCMode == TMTSCMODE_NATIVE_API)
4013 pHlp->pfnPrintf(pHlp, " - native api");
4014 else
4015 pHlp->pfnPrintf(pHlp, " - virtual clock");
4016 pHlp->pfnPrintf(pHlp, "\n");
4017 }
4018
4019 /*
4020 * virtual
4021 */
4022 pHlp->pfnPrintf(pHlp,
4023 " Virtual: %18RU64 (%#016RX64) %RU64Hz %s",
4024 u64Virtual, u64Virtual, TMVirtualGetFreq(pVM),
4025 pVM->tm.s.cVirtualTicking ? "ticking" : "paused");
4026 if (pVM->tm.s.fVirtualWarpDrive)
4027 pHlp->pfnPrintf(pHlp, " WarpDrive %RU32 %%", pVM->tm.s.u32VirtualWarpDrivePercentage);
4028 pHlp->pfnPrintf(pHlp, "\n");
4029
4030 /*
4031 * virtual sync
4032 */
4033 pHlp->pfnPrintf(pHlp,
4034 "VirtSync: %18RU64 (%#016RX64) %s%s",
4035 u64VirtualSync, u64VirtualSync,
4036 pVM->tm.s.fVirtualSyncTicking ? "ticking" : "paused",
4037 pVM->tm.s.fVirtualSyncCatchUp ? " - catchup" : "");
4038 if (pVM->tm.s.offVirtualSync)
4039 {
4040 pHlp->pfnPrintf(pHlp, "\n offset %RU64", pVM->tm.s.offVirtualSync);
4041 if (pVM->tm.s.u32VirtualSyncCatchUpPercentage)
4042 pHlp->pfnPrintf(pHlp, " catch-up rate %u %%", pVM->tm.s.u32VirtualSyncCatchUpPercentage);
4043 }
4044 pHlp->pfnPrintf(pHlp, "\n");
4045
4046 /*
4047 * real
4048 */
4049 pHlp->pfnPrintf(pHlp,
4050 " Real: %18RU64 (%#016RX64) %RU64Hz\n",
4051 u64Real, u64Real, TMRealGetFreq(pVM));
4052}
4053
4054
4055/**
4056 * Helper for tmR3InfoCpuLoad that adjust @a uPct to the given graph width.
4057 */
4058DECLINLINE(size_t) tmR3InfoCpuLoadAdjustWidth(size_t uPct, size_t cchWidth)
4059{
4060 if (cchWidth != 100)
4061 uPct = (size_t)(((double)uPct + 0.5) * ((double)cchWidth / 100.0));
4062 return uPct;
4063}
4064
4065
4066/**
4067 * @callback_method_impl{FNDBGFINFOARGVINT}
4068 */
4069static DECLCALLBACK(void) tmR3InfoCpuLoad(PVM pVM, PCDBGFINFOHLP pHlp, int cArgs, char **papszArgs)
4070{
4071 char szTmp[1024];
4072
4073 /*
4074 * Parse arguments.
4075 */
4076 PTMCPULOADSTATE pState = &pVM->tm.s.CpuLoad;
4077 VMCPUID idCpu = 0;
4078 bool fAllCpus = true;
4079 bool fExpGraph = true;
4080 uint32_t cchWidth = 80;
4081 uint32_t cPeriods = RT_ELEMENTS(pState->aHistory);
4082 uint32_t cRows = 60;
4083
4084 static const RTGETOPTDEF s_aOptions[] =
4085 {
4086 { "all", 'a', RTGETOPT_REQ_NOTHING },
4087 { "cpu", 'c', RTGETOPT_REQ_UINT32 },
4088 { "periods", 'p', RTGETOPT_REQ_UINT32 },
4089 { "rows", 'r', RTGETOPT_REQ_UINT32 },
4090 { "uni", 'u', RTGETOPT_REQ_NOTHING },
4091 { "uniform", 'u', RTGETOPT_REQ_NOTHING },
4092 { "width", 'w', RTGETOPT_REQ_UINT32 },
4093 { "exp", 'x', RTGETOPT_REQ_NOTHING },
4094 { "exponential", 'x', RTGETOPT_REQ_NOTHING },
4095 };
4096
4097 RTGETOPTSTATE State;
4098 int rc = RTGetOptInit(&State, cArgs, papszArgs, s_aOptions, RT_ELEMENTS(s_aOptions), 0, 0 /*fFlags*/);
4099 AssertRC(rc);
4100
4101 RTGETOPTUNION ValueUnion;
4102 while ((rc = RTGetOpt(&State, &ValueUnion)) != 0)
4103 {
4104 switch (rc)
4105 {
4106 case 'a':
4107 pState = &pVM->apCpusR3[0]->tm.s.CpuLoad;
4108 idCpu = 0;
4109 fAllCpus = true;
4110 break;
4111 case 'c':
4112 if (ValueUnion.u32 < pVM->cCpus)
4113 {
4114 pState = &pVM->apCpusR3[ValueUnion.u32]->tm.s.CpuLoad;
4115 idCpu = ValueUnion.u32;
4116 }
4117 else
4118 {
4119 pState = &pVM->tm.s.CpuLoad;
4120 idCpu = VMCPUID_ALL;
4121 }
4122 fAllCpus = false;
4123 break;
4124 case 'p':
4125 cPeriods = RT_MIN(RT_MAX(ValueUnion.u32, 1), RT_ELEMENTS(pState->aHistory));
4126 break;
4127 case 'r':
4128 cRows = RT_MIN(RT_MAX(ValueUnion.u32, 5), RT_ELEMENTS(pState->aHistory));
4129 break;
4130 case 'w':
4131 cchWidth = RT_MIN(RT_MAX(ValueUnion.u32, 10), sizeof(szTmp) - 32);
4132 break;
4133 case 'x':
4134 fExpGraph = true;
4135 break;
4136 case 'u':
4137 fExpGraph = false;
4138 break;
4139 case 'h':
4140 pHlp->pfnPrintf(pHlp,
4141 "Usage: cpuload [parameters]\n"
4142 " all, -a\n"
4143 " Show statistics for all CPUs. (default)\n"
4144 " cpu=id, -c id\n"
4145 " Show statistics for the specified CPU ID. Show combined stats if out of range.\n"
4146 " periods=count, -p count\n"
4147 " Number of periods to show. Default: all\n"
4148 " rows=count, -r count\n"
4149 " Number of rows in the graphs. Default: 60\n"
4150 " width=count, -w count\n"
4151 " Core graph width in characters. Default: 80\n"
4152 " exp, exponential, -e\n"
4153 " Do 1:1 for the more recent half / 30 seconds of the graph, combine the\n"
4154 " rest into increasingly larger chunks. Default.\n"
4155 " uniform, uni, -u\n"
4156 " Combine periods into rows in a uniform manner for the whole graph.\n");
4157 return;
4158 default:
4159 pHlp->pfnGetOptError(pHlp, rc, &ValueUnion, &State);
4160 return;
4161 }
4162 }
4163
4164 /*
4165 * Do the job.
4166 */
4167 for (;;)
4168 {
4169 uint32_t const cMaxPeriods = pState->cHistoryEntries;
4170 if (cPeriods > cMaxPeriods)
4171 cPeriods = cMaxPeriods;
4172 if (cPeriods > 0)
4173 {
4174 if (fAllCpus)
4175 {
4176 if (idCpu > 0)
4177 pHlp->pfnPrintf(pHlp, "\n");
4178 pHlp->pfnPrintf(pHlp, " CPU load for virtual CPU %#04x\n"
4179 " -------------------------------\n", idCpu);
4180 }
4181
            /*
             * Figure out the number of periods per chunk.  We can either do
             * this in a linear fashion or in an exponential fashion that
             * compresses old history more.
             */
            size_t cPerRowDecrement = 0;
            size_t cPeriodsPerRow   = 1;
            if (cRows < cPeriods)
            {
                if (!fExpGraph)
                    cPeriodsPerRow = (cPeriods + cRows / 2) / cRows;
                else
                {
                    /* The last 30 seconds or half of the rows are 1:1, the rest
                       covers increasingly larger period counts.  The code is a
                       little simplistic, but it seems to do the job most of the
                       time. */
                    size_t cPeriodsOneToOne = RT_MIN(30, cRows / 2);
                    size_t cRestRows        = cRows - cPeriodsOneToOne;
                    size_t cRestPeriods     = cPeriods - cPeriodsOneToOne;

                    size_t cPeriodsInWindow = 0;
                    for (cPeriodsPerRow = 0; cPeriodsPerRow <= cRestRows && cPeriodsInWindow < cRestPeriods; cPeriodsPerRow++)
                        cPeriodsInWindow += cPeriodsPerRow + 1;

                    size_t iLower = 1;
                    while (cPeriodsInWindow < cRestPeriods)
                    {
                        cPeriodsPerRow++;
                        cPeriodsInWindow += cPeriodsPerRow;
                        cPeriodsInWindow -= iLower;
                        iLower++;
                    }

                    cPerRowDecrement = 1;
                }
            }
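
            /* Worked example of the exponential sizing above (illustrative
               numbers, not from the source): with cRows = 60 and
               cPeriods = 495 we get cPeriodsOneToOne = 30, cRestRows = 30
               and cRestPeriods = 465.  The for-loop accumulates
               1 + 2 + ... + 30 = 465 >= 465 and exits with
               cPeriodsPerRow = 30, so the oldest row averages 30 periods,
               the next one 29 (cPerRowDecrement = 1), and so on down to the
               1:1 window covering the newest 30 periods. */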

            /*
             * Do the work.
             */
            size_t cPctExecuting       = 0;
            size_t cPctOther           = 0;
            size_t cPeriodsAccumulated = 0;

            size_t cRowsLeft = cRows;
            size_t iHistory  = (pState->idxHistory + RT_ELEMENTS(pState->aHistory) - cPeriods)
                             % RT_ELEMENTS(pState->aHistory); /* Add the array size before subtracting
                                                                 so the unsigned arithmetic cannot wrap. */
            while (cPeriods-- > 0)
            {
                iHistory++;
                if (iHistory >= RT_ELEMENTS(pState->aHistory))
                    iHistory = 0;

                cPctExecuting       += pState->aHistory[iHistory].cPctExecuting;
                cPctOther           += pState->aHistory[iHistory].cPctOther;
                cPeriodsAccumulated += 1;
                if (   cPeriodsAccumulated >= cPeriodsPerRow
                    || cPeriods < cRowsLeft)
                {
                    /*
                     * Format and output the line.
                     */
                    size_t offTmp = 0;
                    size_t i = tmR3InfoCpuLoadAdjustWidth(cPctExecuting / cPeriodsAccumulated, cchWidth);
                    while (i-- > 0)
                        szTmp[offTmp++] = '#';
                    i = tmR3InfoCpuLoadAdjustWidth(cPctOther / cPeriodsAccumulated, cchWidth);
                    while (i-- > 0)
                        szTmp[offTmp++] = 'O';
                    szTmp[offTmp] = '\0';

                    cRowsLeft--;
                    pHlp->pfnPrintf(pHlp, "%3zus: %s\n", cPeriods + cPeriodsAccumulated / 2, szTmp);

                    /* Reset the state: */
                    cPctExecuting       = 0;
                    cPctOther           = 0;
                    cPeriodsAccumulated = 0;
                    if (cPeriodsPerRow > cPerRowDecrement)
                        cPeriodsPerRow -= cPerRowDecrement;
                }
            }
            pHlp->pfnPrintf(pHlp, "      (#=guest, O=VMM overhead)  idCpu=%#x\n", idCpu);
        }
        else
            pHlp->pfnPrintf(pHlp, "No load data.\n");
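
        /* Illustrative sample of the resulting output (made-up numbers):
                29s: ############OO
                 5s: ########O
                 0s: ######O
                     (#=guest, O=VMM overhead)  idCpu=0
           Each row is one (averaged) period window, labelled with the rough
           age in seconds; '#' cells are guest execution time and 'O' cells
           VMM overhead, both scaled to cchWidth. */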

        /*
         * Next CPU if we're displaying all of them.
         */
        if (!fAllCpus)
            break;
        idCpu++;
        if (idCpu >= pVM->cCpus)
            break;
        pState = &pVM->apCpusR3[idCpu]->tm.s.CpuLoad;
    }
}
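
/*
 * For context, a minimal sketch (assumptions flagged below) of how an argv
 * style info handler like the one above gets hooked up to the debugger: it
 * is registered with DBGF under a name, after which the debugger command
 * "info cpuload [options]" dispatches to it.  The real registration happens
 * during TM init elsewhere in this file; the wrapper function name and the
 * description string below are made up for illustration.
 */
#if 0 /* illustrative sketch only, not built */
static int tmR3RegisterCpuLoadInfo(PVM pVM)
{
    /* Argv-style internal info handler; DBGFINFO_FLAGS_RUN_ON_EMT makes
       DBGF execute the handler on the emulation thread: */
    return DBGFR3InfoRegisterInternalArgv(pVM, "cpuload",
                                          "Display the CPU load statistics (see -h for options).",
                                          tmR3InfoCpuLoad, DBGFINFO_FLAGS_RUN_ON_EMT);
}
#endif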


/**
 * Gets the descriptive TM TSC mode name given the enum value.
 *
 * @returns The name.
 * @param   enmMode     The mode to name.
 */
static const char *tmR3GetTSCModeNameEx(TMTSCMODE enmMode)
{
    switch (enmMode)
    {
        case TMTSCMODE_REAL_TSC_OFFSET:     return "RealTSCOffset";
        case TMTSCMODE_VIRT_TSC_EMULATED:   return "VirtTSCEmulated";
        case TMTSCMODE_DYNAMIC:             return "Dynamic";
        case TMTSCMODE_NATIVE_API:          return "NativeApi";
        default:                            return "???";
    }
}


/**
 * Gets the descriptive TM TSC mode name.
 *
 * @returns The name.
 * @param   pVM     The cross context VM structure.
 */
static const char *tmR3GetTSCModeName(PVM pVM)
{
    Assert(pVM);
    return tmR3GetTSCModeNameEx(pVM->tm.s.enmTSCMode);
}

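/* Usage note (illustrative, not copied from this file): these helpers are
   meant for status and log output, e.g. something along the lines of

       LogRel(("TM: TSC mode is '%s'\n", tmR3GetTSCModeName(pVM)));

   which renders the configured TMTSCMODE symbolically, say "Dynamic",
   instead of as a raw enum value. */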