VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/DBGFR3Bp.cpp@ 106952

Last change on this file since 106952 was 106362, checked in by vboxsync, 5 weeks ago

VMM/DBGF: Prepare DBGF to support ARMv8/A64 style breakpoints for the VMM debugger. This converts the x86 centric int3 naming to software breakpoint, bugref:10393

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id Revision
File size: 107.5 KB
1/* $Id: DBGFR3Bp.cpp 106362 2024-10-16 13:08:09Z vboxsync $ */
2/** @file
3 * DBGF - Debugger Facility, Breakpoint Management.
4 */
5
6/*
7 * Copyright (C) 2006-2024 Oracle and/or its affiliates.
8 *
9 * This file is part of VirtualBox base platform packages, as
10 * available from https://www.virtualbox.org.
11 *
12 * This program is free software; you can redistribute it and/or
13 * modify it under the terms of the GNU General Public License
14 * as published by the Free Software Foundation, in version 3 of the
15 * License.
16 *
17 * This program is distributed in the hope that it will be useful, but
18 * WITHOUT ANY WARRANTY; without even the implied warranty of
19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
20 * General Public License for more details.
21 *
22 * You should have received a copy of the GNU General Public License
23 * along with this program; if not, see <https://www.gnu.org/licenses>.
24 *
25 * SPDX-License-Identifier: GPL-3.0-only
26 */
27
28
29/** @page pg_dbgf_bp DBGF - The Debugger Facility, Breakpoint Management
30 *
31 * The purpose of the debugger facility's breakpoint manager is to efficiently manage
32 * large numbers of breakpoints for various use cases, like dtrace-like operations
33 * or execution flow tracing. Especially execution flow tracing can
34 * require thousands of breakpoints which need to be managed efficiently so as not to slow
35 * down guest operation too much. Before the rewrite, started at the end of 2020, DBGF could
36 * only handle 32 breakpoints (+ 4 hardware assisted breakpoints). The new
37 * manager is supposed to be able to handle up to one million breakpoints.
38 *
39 * @see grp_dbgf
40 *
41 *
42 * @section sec_dbgf_bp_owner Breakpoint owners
43 *
44 * A single breakpoint owner has a mandatory ring-3 callback and an optional ring-0
45 * callback assigned which is called whenever a breakpoint with the owner assigned is hit.
46 * The common part of the owner is managed by a single table mapped into both ring-0
47 * and ring-3, with the handle being the index into the table. This allows resolving
48 * the handle to the internal structure efficiently. Searching for a free entry is
49 * done using a bitmap indicating free and occupied entries. For the optional
50 * ring-0 owner part there is a separate ring-0 only table for security reasons.
51 *
52 * The callback of the owner can be used to gather and log guest state information
53 * and decide whether to continue guest execution or stop and drop into the debugger.
54 * Breakpoints which don't have an owner assigned will always drop the VM right into
55 * the debugger.
56 *
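 * A minimal, hypothetical sketch of such a ring-3 hit callback (the parameter
 * list and the halt status code are assumptions for illustration only; the real
 * callback type lives in the DBGF headers):
 *
 * @code
 *      // Log every hit and keep the guest running until the 100th hit, then halt.
 *      static DECLCALLBACK(VBOXSTRICTRC) myBpHit(PVM pVM, VMCPUID idCpu, void *pvUserBp,
 *                                                DBGFBP hBp, PCDBGFBPPUB pBpPub, uint16_t fFlags)
 *      {
 *          RT_NOREF(pVM, idCpu, hBp, fFlags);
 *          uint64_t *pcHits = (uint64_t *)pvUserBp;    // opaque user argument given at creation
 *          *pcHits += 1;
 *          LogRel(("Breakpoint hit at %RGv (%RU64 hits)\n", pBpPub->u.Sw.GCPtr, *pcHits));
 *          return *pcHits < 100 ? VINF_SUCCESS : VINF_DBGF_BP_HALT;
 *      }
 * @endcode
 *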
57 *
58 * @section sec_dbgf_bp_bps Breakpoints
59 *
60 * Breakpoints are referenced by an opaque handle which acts as an index into a global table
61 * mapped into ring-3 and ring-0. Each entry contains the necessary state to manage the breakpoint
62 * like trigger conditions, type, owner, etc. If an owner is given, an optional opaque user argument
63 * can be supplied which is passed in the respective owner callback. For owners with ring-0 callbacks
64 * a dedicated ring-0 table is held saving possible ring-0 user arguments.
65 *
66 * To keep memory consumption under control and still support large numbers of
67 * breakpoints, the table is split into fixed-size chunks; the chunk index and the index
68 * into the chunk can be derived from the handle with only a few logical operations (sketched below).
69 *
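 * A minimal sketch of that derivation (the authoritative macros are
 * DBGF_BP_HND_CREATE(), DBGF_BP_HND_GET_CHUNK_ID() and DBGF_BP_HND_GET_ENTRY();
 * the exact bit split below is an illustrative assumption, not the definitive layout):
 *
 * @code
 *      // Assumed layout: 32-bit handle, entry index in the low 16 bits (one chunk
 *      // holds 65536 breakpoints), chunk id in the bits above it.
 *      uint32_t const idxEntry = hBp & 0xffff;
 *      uint32_t const idChunk  = hBp >> 16;
 *      PDBGFBPINT     pBp      = &pUVM->dbgf.s.aBpChunks[idChunk].pBpBaseR3[idxEntry];
 * @endcode
 *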
70 *
71 * @section sec_dbgf_bp_resolv Resolving breakpoint addresses
72 *
73 * Whenever a \#BP(0) event is triggered DBGF needs to decide whether the event originated
74 * from within the guest or whether a DBGF breakpoint caused it. This has to happen as fast
75 * as possible. The following scheme is employed to achieve this:
76 *
77 * @verbatim
78 * 7 6 5 4 3 2 1 0
79 * +---+---+---+---+---+---+---+---+
80 * | | | | | | | | | BP address
81 * +---+---+---+---+---+---+---+---+
82 * \_____________________/ \_____/
83 * | |
84 * | +---------------+
85 * | |
86 * BP table | v
87 * +------------+ | +-----------+
88 * | hBp 0 | | X <- | 0 | xxxxx |
89 * | hBp 1 | <----------------+------------------------ | 1 | hBp 1 |
90 * | | | +--- | 2 | idxL2 |
91 * | hBp <m> | <---+ v | |...| ... |
92 * | | | +-----------+ | |...| ... |
93 * | | | | | | |...| ... |
94 * | hBp <n> | <-+ +----- | +> leaf | | | . |
95 * | | | | | | | | . |
96 * | | | | + root + | <------------+ | . |
97 * | | | | | | +-----------+
98 * | | +------- | leaf<+ | L1: 65536
99 * | . | | . |
100 * | . | | . |
101 * | . | | . |
102 * +------------+ +-----------+
103 * L2 idx BST
104 * @endverbatim
105 *
106 * -# Take the lowest 16 bits of the breakpoint address and use them as a direct index
107 * into the L1 table. The L1 table is contiguous and consists of 4-byte entries,
108 * resulting in 256KiB of memory used. The topmost 4 bits indicate how to proceed
109 * and the meaning of the remaining 28 bits depends on the topmost 4 bits:
110 * - A 0 type entry means no breakpoint is registered with the matching lowest 16 bits,
111 * so forward the event to the guest.
112 * - A 1 in the topmost 4 bits means that the remaining 28 bits directly denote a breakpoint
113 * handle which can be resolved by extracting the chunk index and the index into the chunk
114 * of the global breakpoint table. If the address matches, the breakpoint is processed
115 * according to the configuration. Otherwise the event is again forwarded to the guest.
116 * - A 2 in the topmost 4 bits means that there are multiple breakpoints registered
117 * matching the lowest 16 bits and the search must continue in the L2 table with the
118 * remaining 28 bits acting as an index into the L2 table indicating the search root.
119 * -# The L2 table consists of multiple index-based binary search trees, one for each reference
120 * from the L1 table. The keys for the trees are the upper 6 bytes of the breakpoint address
121 * used for searching. A tree is traversed until either a matching address is found and
122 * the breakpoint is processed, or the event is forwarded to the guest if the search isn't successful.
123 * Each entry in the L2 table is 16 bytes big and densely packed to avoid excessive memory usage. A condensed sketch of the lookup follows below.
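 *
 * Condensed into code, the fast path of this lookup looks roughly like the
 * following sketch (taken from the software breakpoint case of
 * dbgfR3BpGetByAddr() further down; validation and the L2 tree walk are omitted):
 *
 * @code
 *      DBGFBP         hBp       = NIL_DBGFBP;   // stays NIL: the event belongs to the guest (case 1)
 *      uint32_t       idxL2Root = UINT32_MAX;
 *      uint16_t const idxL1     = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtr);
 *      uint32_t const u32Entry  = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
 *      if (DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry) == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
 *          hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);        // case 2: single breakpoint, direct handle
 *      else if (DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry) == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
 *          idxL2Root = DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry);  // case 3: walk the L2 search tree from this root
 * @endcode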
124 *
125 * @section sec_dbgf_bp_ioport Handling I/O port breakpoints
126 *
127 * Because of the limited number of I/O ports available (65536), a single table with 65536 entries,
128 * each 4 bytes big, will be allocated. This amounts to 256KiB of memory used additionally as soon as
129 * an I/O breakpoint is enabled. The entries contain the breakpoint handle directly, allowing only one breakpoint
130 * per port, which is a limitation we accept for now to keep things relatively simple.
131 * When there is at least one I/O breakpoint active, IOM will be notified and it will afterwards call the DBGF API
132 * whenever the guest does an I/O port access to decide whether a breakpoint was hit. This keeps the overhead small
133 * when there is no I/O port breakpoint enabled.
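 *
 * The per-access check then boils down to a cheap table lookup along these lines
 * (a simplified sketch; the real code also has to match the configured access
 * type against the breakpoint settings):
 *
 * @code
 *      uint32_t const u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocPortIoR3[uPort]);
 *      if (u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
 *      {
 *          DBGFBP const hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);   // handle stored directly
 *          // ... resolve hBp and process the breakpoint ...
 *      }
 * @endcode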
134 *
135 * @section sec_dbgf_bp_note Random thoughts and notes for the implementation
136 *
137 * - The assumption for this approach is that the lowest 16 bits of the breakpoint address are
138 * hopefully the most varying ones across breakpoints, so the traversal
139 * can skip the L2 table in most cases. Even if the L2 table must be consulted, the
140 * individual trees should be quite shallow, resulting in low overhead when walking them
141 * (though only real world testing can confirm this assumption).
142 * - Index-based tables and trees are used instead of pointers because the tables
143 * are always mapped into ring-0 and ring-3 with different base addresses.
144 * - Efficient breakpoint allocation is done by having a global bitmap indicating free
145 * and occupied breakpoint entries. The same applies to the L2 BST table.
146 * - Special care must be taken when modifying the L1 and L2 tables as other EMTs
147 * might still access them (we want to try a lockless approach first using
148 * atomic updates, and have to resort to locking if that turns out to be too difficult).
149 * - Each BP entry is supposed to be 64 bytes big and each chunk should contain 65536
150 * breakpoints, which results in 4MiB for each chunk plus the allocation bitmap.
151 * - Ring-0 has to take special care when traversing the L2 BST to not run into cycles
152 * and to do strict bounds checking before accessing anything. The L1 and L2 tables
153 * are written to from ring-3 only. The same goes for the breakpoint table, with the
154 * exception being the opaque user argument for ring-0 which is stored in ring-0 only
155 * memory.
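 *
 * The sizing mentioned above is easy to sanity check at compile time; a sketch of
 * matching assertions (the literal values are the design targets stated in these
 * notes, assumed rather than verified here):
 *
 * @code
 *      AssertCompile(sizeof(DBGFBPINT) == 64);                                  // 64 bytes per breakpoint entry
 *      AssertCompile(DBGF_BP_COUNT_PER_CHUNK == 65536);                         // breakpoints per chunk
 *      AssertCompile(DBGF_BP_COUNT_PER_CHUNK * sizeof(DBGFBPINT) == 4 * _1M);   // 4 MiB of breakpoint state per chunk
 *      AssertCompile(DBGF_BP_COUNT_PER_CHUNK / 8 == 8 * _1K);                   // plus an 8 KiB allocation bitmap
 * @endcode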
156 */
157
158
159/*********************************************************************************************************************************
160* Header Files *
161*********************************************************************************************************************************/
162#define LOG_GROUP LOG_GROUP_DBGF
163#define VMCPU_INCL_CPUM_GST_CTX
164#include <VBox/vmm/cpum.h>
165#include <VBox/vmm/dbgf.h>
166#include <VBox/vmm/selm.h>
167#include <VBox/vmm/iem.h>
168#include <VBox/vmm/mm.h>
169#include <VBox/vmm/iom.h>
170#include <VBox/vmm/hm.h>
171#include "DBGFInternal.h"
172#include <VBox/vmm/vm.h>
173#include <VBox/vmm/uvm.h>
174
175#include <VBox/err.h>
176#include <VBox/log.h>
177#include <iprt/assert.h>
178#include <iprt/mem.h>
179#if defined(VBOX_VMM_TARGET_ARMV8)
180# include <iprt/armv8.h>
181#endif
182
183#include "DBGFInline.h"
184
185
186/*********************************************************************************************************************************
187* Structures and Typedefs *
188*********************************************************************************************************************************/
189
190
191/*********************************************************************************************************************************
192* Internal Functions *
193*********************************************************************************************************************************/
194RT_C_DECLS_BEGIN
195RT_C_DECLS_END
196
197
198/**
199 * Initializes the breakpoint management.
200 *
201 * @returns VBox status code.
202 * @param pUVM The user mode VM handle.
203 */
204DECLHIDDEN(int) dbgfR3BpInit(PUVM pUVM)
205{
206 PVM pVM = pUVM->pVM;
207
208 //pUVM->dbgf.s.paBpOwnersR3 = NULL;
209 //pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
210
211 /* Init hardware breakpoint states. */
212 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
213 {
214 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
215
216 AssertCompileSize(DBGFBP, sizeof(uint32_t));
217 pHwBp->hBp = NIL_DBGFBP;
218 //pHwBp->fEnabled = false;
219 }
220
221 /* Now the global breakpoint table chunks. */
222 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
223 {
224 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
225
226 //pBpChunk->pBpBaseR3 = NULL;
227 //pBpChunk->pbmAlloc = NULL;
228 //pBpChunk->cBpsFree = 0;
229 pBpChunk->idChunk = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
230 }
231
232 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
233 {
234 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
235
236 //pL2Chunk->pL2BaseR3 = NULL;
237 //pL2Chunk->pbmAlloc = NULL;
238 //pL2Chunk->cFree = 0;
239 pL2Chunk->idChunk = DBGF_BP_CHUNK_ID_INVALID; /* Not allocated. */
240 }
241
242 //pUVM->dbgf.s.paBpLocL1R3 = NULL;
243 //pUVM->dbgf.s.paBpLocPortIoR3 = NULL;
244 pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
245 return RTSemFastMutexCreate(&pUVM->dbgf.s.hMtxBpL2Wr);
246}
247
248
249/**
250 * Terminates the breakpoint management.
251 *
252 * @returns VBox status code.
253 * @param pUVM The user mode VM handle.
254 */
255DECLHIDDEN(int) dbgfR3BpTerm(PUVM pUVM)
256{
257 if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
258 {
259 RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
260 pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
261 }
262
263 /* Free all allocated chunk bitmaps (the chunks themselves are destroyed during ring-0 VM destruction). */
264 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
265 {
266 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
267
268 if (pBpChunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
269 {
270 AssertPtr(pBpChunk->pbmAlloc);
271 RTMemFree((void *)pBpChunk->pbmAlloc);
272 pBpChunk->pbmAlloc = NULL;
273 pBpChunk->idChunk = DBGF_BP_CHUNK_ID_INVALID;
274 }
275 }
276
277 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
278 {
279 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
280
281 if (pL2Chunk->idChunk != DBGF_BP_CHUNK_ID_INVALID)
282 {
283 AssertPtr(pL2Chunk->pbmAlloc);
284 RTMemFree((void *)pL2Chunk->pbmAlloc);
285 pL2Chunk->pbmAlloc = NULL;
286 pL2Chunk->idChunk = DBGF_BP_CHUNK_ID_INVALID;
287 }
288 }
289
290 if (pUVM->dbgf.s.hMtxBpL2Wr != NIL_RTSEMFASTMUTEX)
291 {
292 RTSemFastMutexDestroy(pUVM->dbgf.s.hMtxBpL2Wr);
293 pUVM->dbgf.s.hMtxBpL2Wr = NIL_RTSEMFASTMUTEX;
294 }
295
296 return VINF_SUCCESS;
297}
298
299
300/**
301 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
302 */
303static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
304{
305 RT_NOREF(pvUser);
306
307 VMCPU_ASSERT_EMT(pVCpu);
308 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
309
310 /*
311 * The initialization will be done on EMT(0). It is possible that multiple
312 * initialization attempts are done because dbgfR3BpEnsureInit() can be called
313 * from racing non EMT threads when trying to set a breakpoint for the first time.
314 * Just fake success if the L1 is already present which means that a previous rendezvous
315 * successfully initialized the breakpoint manager.
316 */
317 PUVM pUVM = pVM->pUVM;
318 if ( pVCpu->idCpu == 0
319 && !pUVM->dbgf.s.paBpLocL1R3)
320 {
321 if (!SUPR3IsDriverless())
322 {
323 DBGFBPINITREQ Req;
324 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
325 Req.Hdr.cbReq = sizeof(Req);
326 Req.paBpLocL1R3 = NULL;
327 int rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_INIT, 0 /*u64Arg*/, &Req.Hdr);
328 AssertLogRelMsgRCReturn(rc, ("VMMR0_DO_DBGF_BP_INIT failed: %Rrc\n", rc), rc);
329 pUVM->dbgf.s.paBpLocL1R3 = Req.paBpLocL1R3;
330 }
331 else
332 {
333 /* Driverless: Do dbgfR0BpInitWorker here, ring-3 style. */
334 uint32_t const cbL1Loc = RT_ALIGN_32(UINT16_MAX * sizeof(uint32_t), HOST_PAGE_SIZE);
335 pUVM->dbgf.s.paBpLocL1R3 = (uint32_t *)RTMemPageAllocZ(cbL1Loc);
336 AssertLogRelMsgReturn(pUVM->dbgf.s.paBpLocL1R3, ("cbL1Loc=%#x\n", cbL1Loc), VERR_NO_PAGE_MEMORY);
337 }
338 }
339
340 return VINF_SUCCESS;
341}
342
343
344/**
345 * Ensures that the breakpoint manager is fully initialized.
346 *
347 * @returns VBox status code.
348 * @param pUVM The user mode VM handle.
349 *
350 * @thread Any thread.
351 */
352static int dbgfR3BpEnsureInit(PUVM pUVM)
353{
354 /* If the L1 lookup table is allocated initialization succeeded before. */
355 if (RT_LIKELY(pUVM->dbgf.s.paBpLocL1R3))
356 return VINF_SUCCESS;
357
358 /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
359 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInitEmtWorker, NULL /*pvUser*/);
360}
361
362
363/**
364 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
365 */
366static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpPortIoInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
367{
368 RT_NOREF(pvUser);
369
370 VMCPU_ASSERT_EMT(pVCpu);
371 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
372
373 /*
374 * The initialization will be done on EMT(0). It is possible that multiple
375 * initialization attempts are done because dbgfR3BpPortIoEnsureInit() can be called
376 * from racing non EMT threads when trying to set a breakpoint for the first time.
377 * Just fake success if the L1 is already present which means that a previous rendezvous
378 * successfully initialized the breakpoint manager.
379 */
380 PUVM pUVM = pVM->pUVM;
381 if ( pVCpu->idCpu == 0
382 && !pUVM->dbgf.s.paBpLocPortIoR3)
383 {
384 if (!SUPR3IsDriverless())
385 {
386 DBGFBPINITREQ Req;
387 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
388 Req.Hdr.cbReq = sizeof(Req);
389 Req.paBpLocL1R3 = NULL;
390 int rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_PORTIO_INIT, 0 /*u64Arg*/, &Req.Hdr);
391 AssertLogRelMsgRCReturn(rc, ("VMMR0_DO_DBGF_BP_PORTIO_INIT failed: %Rrc\n", rc), rc);
392 pUVM->dbgf.s.paBpLocPortIoR3 = Req.paBpLocL1R3;
393 }
394 else
395 {
396 /* Driverless: Do dbgfR0BpPortIoInitWorker here, ring-3 style. */
397 uint32_t const cbPortIoLoc = RT_ALIGN_32(UINT16_MAX * sizeof(uint32_t), HOST_PAGE_SIZE);
398 pUVM->dbgf.s.paBpLocPortIoR3 = (uint32_t *)RTMemPageAllocZ(cbPortIoLoc);
399 AssertLogRelMsgReturn(pUVM->dbgf.s.paBpLocPortIoR3, ("cbPortIoLoc=%#x\n", cbPortIoLoc), VERR_NO_PAGE_MEMORY);
400 }
401 }
402
403 return VINF_SUCCESS;
404}
405
406
407/**
408 * Ensures that the breakpoint manager is initialized to handle I/O port breakpoints.
409 *
410 * @returns VBox status code.
411 * @param pUVM The user mode VM handle.
412 *
413 * @thread Any thread.
414 */
415static int dbgfR3BpPortIoEnsureInit(PUVM pUVM)
416{
417 /* If the L1 lookup table is allocated initialization succeeded before. */
418 if (RT_LIKELY(pUVM->dbgf.s.paBpLocPortIoR3))
419 return VINF_SUCCESS;
420
421 /* Ensure that the breakpoint manager is initialized. */
422 int rc = dbgfR3BpEnsureInit(pUVM);
423 if (RT_FAILURE(rc))
424 return rc;
425
426 /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
427 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpPortIoInitEmtWorker, NULL /*pvUser*/);
428}
429
430
431/**
432 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
433 */
434static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpOwnerInitEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
435{
436 RT_NOREF(pvUser);
437
438 VMCPU_ASSERT_EMT(pVCpu);
439 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
440
441 /*
442 * The initialization will be done on EMT(0). It is possible that multiple
443 * initialization attempts are done because dbgfR3BpOwnerEnsureInit() can be called
444 * from racing non EMT threads when trying to create a breakpoint owner for the first time.
445 * Just fake success if the pointers are initialized already, meaning that a previous rendezvous
446 * successfully initialized the breakpoint owner table.
447 */
448 int rc = VINF_SUCCESS;
449 PUVM pUVM = pVM->pUVM;
450 if ( pVCpu->idCpu == 0
451 && !pUVM->dbgf.s.pbmBpOwnersAllocR3)
452 {
453 AssertCompile(!(DBGF_BP_OWNER_COUNT_MAX % 64));
454 pUVM->dbgf.s.pbmBpOwnersAllocR3 = RTMemAllocZ(DBGF_BP_OWNER_COUNT_MAX / 8);
455 if (pUVM->dbgf.s.pbmBpOwnersAllocR3)
456 {
457 if (!SUPR3IsDriverless())
458 {
459 DBGFBPOWNERINITREQ Req;
460 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
461 Req.Hdr.cbReq = sizeof(Req);
462 Req.paBpOwnerR3 = NULL;
463 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_OWNER_INIT, 0 /*u64Arg*/, &Req.Hdr);
464 if (RT_SUCCESS(rc))
465 {
466 pUVM->dbgf.s.paBpOwnersR3 = (PDBGFBPOWNERINT)Req.paBpOwnerR3;
467 return VINF_SUCCESS;
468 }
469 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_OWNER_INIT failed: %Rrc\n", rc));
470 }
471 else
472 {
473 /* Driverless: Do dbgfR0BpOwnerInitWorker here, ring-3 style. */
474 uint32_t const cbBpOwnerR3 = RT_ALIGN_32(DBGF_BP_OWNER_COUNT_MAX * sizeof(DBGFBPOWNERINT), HOST_PAGE_SIZE);
475 pUVM->dbgf.s.paBpOwnersR3 = (PDBGFBPOWNERINT)RTMemPageAllocZ(cbBpOwnerR3);
476 if (pUVM->dbgf.s.paBpOwnersR3)
477 return VINF_SUCCESS;
478 AssertLogRelMsgFailed(("cbBpOwnerR3=%#x\n", cbBpOwnerR3));
479 rc = VERR_NO_PAGE_MEMORY;
480 }
481
482 RTMemFree((void *)pUVM->dbgf.s.pbmBpOwnersAllocR3);
483 pUVM->dbgf.s.pbmBpOwnersAllocR3 = NULL;
484 }
485 else
486 rc = VERR_NO_MEMORY;
487 }
488
489 return rc;
490}
491
492
493/**
494 * Ensures that the breakpoint owner table is fully initialized.
495 *
496 * @returns VBox status code.
497 * @param pUVM The user mode VM handle.
498 *
499 * @thread Any thread.
500 */
501static int dbgfR3BpOwnerEnsureInit(PUVM pUVM)
502{
503 /* If the allocation bitmap is allocated initialization succeeded before. */
504 if (RT_LIKELY(pUVM->dbgf.s.pbmBpOwnersAllocR3))
505 return VINF_SUCCESS;
506
507 /* Gather all EMTs and call into ring-0 to initialize the breakpoint manager. */
508 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpOwnerInitEmtWorker, NULL /*pvUser*/);
509}
510
511
512/**
513 * Retains the given breakpoint owner handle for use.
514 *
515 * @returns VBox status code.
516 * @retval VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
517 * @param pUVM The user mode VM handle.
518 * @param hBpOwner The breakpoint owner handle to retain, NIL_DBGFBPOWNER is accepted without doing anything.
519 * @param fIo Flag whether the owner must have the I/O handler set because it is used by an I/O breakpoint.
520 */
521DECLINLINE(int) dbgfR3BpOwnerRetain(PUVM pUVM, DBGFBPOWNER hBpOwner, bool fIo)
522{
523 if (hBpOwner == NIL_DBGFBPOWNER)
524 return VINF_SUCCESS;
525
526 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
527 if (pBpOwner)
528 {
529 AssertReturn ( ( fIo
530 && pBpOwner->pfnBpIoHitR3)
531 || ( !fIo
532 && pBpOwner->pfnBpHitR3),
533 VERR_INVALID_HANDLE);
534 ASMAtomicIncU32(&pBpOwner->cRefs);
535 return VINF_SUCCESS;
536 }
537
538 return VERR_INVALID_HANDLE;
539}
540
541
542/**
543 * Releases the given breakpoint owner handle.
544 *
545 * @returns VBox status code.
546 * @retval VERR_INVALID_HANDLE if the given breakpoint owner handle is invalid.
547 * @param pUVM The user mode VM handle.
548 * @param hBpOwner The breakpoint owner handle to release, NIL_DBGFBPOWNER is accepted without doing anything.
549 */
550DECLINLINE(int) dbgfR3BpOwnerRelease(PUVM pUVM, DBGFBPOWNER hBpOwner)
551{
552 if (hBpOwner == NIL_DBGFBPOWNER)
553 return VINF_SUCCESS;
554
555 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
556 if (pBpOwner)
557 {
558 Assert(pBpOwner->cRefs > 1);
559 ASMAtomicDecU32(&pBpOwner->cRefs);
560 return VINF_SUCCESS;
561 }
562
563 return VERR_INVALID_HANDLE;
564}
565
566
567/**
568 * Returns the internal breakpoint state for the given handle.
569 *
570 * @returns Pointer to the internal breakpoint state or NULL if the handle is invalid.
571 * @param pUVM The user mode VM handle.
572 * @param hBp The breakpoint handle to resolve.
573 */
574DECLINLINE(PDBGFBPINT) dbgfR3BpGetByHnd(PUVM pUVM, DBGFBP hBp)
575{
576 uint32_t idChunk = DBGF_BP_HND_GET_CHUNK_ID(hBp);
577 uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);
578
579 AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, NULL);
580 AssertReturn(idxEntry < DBGF_BP_COUNT_PER_CHUNK, NULL);
581
582 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
583 AssertReturn(pBpChunk->idChunk == idChunk, NULL);
584 AssertPtrReturn(pBpChunk->pbmAlloc, NULL);
585 AssertReturn(ASMBitTest(pBpChunk->pbmAlloc, idxEntry), NULL);
586
587 return &pBpChunk->pBpBaseR3[idxEntry];
588}
589
590
591/**
592 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
593 */
594static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
595{
596 uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;
597
598 VMCPU_ASSERT_EMT(pVCpu);
599 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
600
601 AssertReturn(idChunk < DBGF_BP_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);
602
603 PUVM pUVM = pVM->pUVM;
604 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
605
606 AssertReturn( pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID
607 || pBpChunk->idChunk == idChunk,
608 VERR_DBGF_BP_IPE_2);
609
610 /*
611 * The initialization will be done on EMT(0). It is possible that multiple
612 * allocation attempts are done when multiple racing non EMT threads try to
613 * allocate a breakpoint and a new chunk needs to be allocated.
614 * Ignore the request and succeed if the chunk is allocated meaning that a
615 * previous rendezvous successfully allocated the chunk.
616 */
617 int rc = VINF_SUCCESS;
618 if ( pVCpu->idCpu == 0
619 && pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
620 {
621 /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
622 AssertCompile(!(DBGF_BP_COUNT_PER_CHUNK % 64));
623 void *pbmAlloc = RTMemAllocZ(DBGF_BP_COUNT_PER_CHUNK / 8);
624 if (RT_LIKELY(pbmAlloc))
625 {
626 if (!SUPR3IsDriverless())
627 {
628 DBGFBPCHUNKALLOCREQ Req;
629 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
630 Req.Hdr.cbReq = sizeof(Req);
631 Req.idChunk = idChunk;
632 Req.pChunkBaseR3 = NULL;
633 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
634 if (RT_SUCCESS(rc))
635 pBpChunk->pBpBaseR3 = (PDBGFBPINT)Req.pChunkBaseR3;
636 else
637 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_CHUNK_ALLOC failed: %Rrc\n", rc));
638 }
639 else
640 {
641 /* Driverless: Do dbgfR0BpChunkAllocWorker here, ring-3 style. */
642 uint32_t const cbShared = RT_ALIGN_32(DBGF_BP_COUNT_PER_CHUNK * sizeof(DBGFBPINT), HOST_PAGE_SIZE);
643 pBpChunk->pBpBaseR3 = (PDBGFBPINT)RTMemPageAllocZ(cbShared);
644 AssertLogRelMsgStmt(pBpChunk->pBpBaseR3, ("cbShared=%#x\n", cbShared), rc = VERR_NO_PAGE_MEMORY);
645 }
646 if (RT_SUCCESS(rc))
647 {
648 pBpChunk->pbmAlloc = (void volatile *)pbmAlloc;
649 pBpChunk->cBpsFree = DBGF_BP_COUNT_PER_CHUNK;
650 pBpChunk->idChunk = idChunk;
651 return VINF_SUCCESS;
652 }
653
654 RTMemFree(pbmAlloc);
655 }
656 else
657 rc = VERR_NO_MEMORY;
658 }
659
660 return rc;
661}
662
663
664/**
665 * Tries to allocate the given chunk which requires an EMT rendezvous.
666 *
667 * @returns VBox status code.
668 * @param pUVM The user mode VM handle.
669 * @param idChunk The chunk to allocate.
670 *
671 * @thread Any thread.
672 */
673DECLINLINE(int) dbgfR3BpChunkAlloc(PUVM pUVM, uint32_t idChunk)
674{
675 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
676}
677
678
679/**
680 * Tries to allocate a new breakpoint of the given type.
681 *
682 * @returns VBox status code.
683 * @param pUVM The user mode VM handle.
684 * @param hOwner The owner handle, NIL_DBGFBPOWNER if none assigned.
685 * @param pvUser Opaque user data passed in the owner callback.
686 * @param enmType Breakpoint type to allocate.
687 * @param fFlags Flags associated with the allocated breakpoint.
688 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
689 * Use 0 (or 1) if it's gonna trigger at once.
690 * @param iHitDisable The hit count which disables the breakpoint.
691 * Use ~(uint64_t)0 if it's never gonna be disabled.
692 * @param phBp Where to return the opaque breakpoint handle on success.
693 * @param ppBp Where to return the pointer to the internal breakpoint state on success.
694 *
695 * @thread Any thread.
696 */
697static int dbgfR3BpAlloc(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser, DBGFBPTYPE enmType,
698 uint16_t fFlags, uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp,
699 PDBGFBPINT *ppBp)
700{
701 bool fIo = enmType == DBGFBPTYPE_PORT_IO
702 || enmType == DBGFBPTYPE_MMIO;
703 int rc = dbgfR3BpOwnerRetain(pUVM, hOwner, fIo);
704 if (RT_FAILURE(rc))
705 return rc;
706
707 /*
708 * Search for a chunk having a free entry, allocating new chunks
709 * if the encountered ones are full.
710 *
711 * This can be called from multiple threads at the same time so special care
712 * has to be taken to not require any locking here.
713 */
714 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); i++)
715 {
716 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[i];
717
718 uint32_t idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
719 if (idChunk == DBGF_BP_CHUNK_ID_INVALID)
720 {
721 rc = dbgfR3BpChunkAlloc(pUVM, i);
722 if (RT_FAILURE(rc))
723 {
724 LogRel(("DBGF/Bp: Allocating new breakpoint table chunk failed with %Rrc\n", rc));
725 break;
726 }
727
728 idChunk = ASMAtomicReadU32(&pBpChunk->idChunk);
729 Assert(idChunk == i);
730 }
731
732 /** @todo Optimize with some hinting if this turns out to be too slow. */
733 for (;;)
734 {
735 uint32_t cBpsFree = ASMAtomicReadU32(&pBpChunk->cBpsFree);
736 if (cBpsFree)
737 {
738 /*
739 * Scan the associated bitmap for a free entry, if none can be found another thread
740 * raced us and we go to the next chunk.
741 */
742 int32_t iClr = ASMBitFirstClear(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
743 if (iClr != -1)
744 {
745 /*
746 * Try to allocate, we could get raced here as well. In that case
747 * we try again.
748 */
749 if (!ASMAtomicBitTestAndSet(pBpChunk->pbmAlloc, iClr))
750 {
751 /* Success, immediately mark as allocated, initialize the breakpoint state and return. */
752 ASMAtomicDecU32(&pBpChunk->cBpsFree);
753
754 PDBGFBPINT pBp = &pBpChunk->pBpBaseR3[iClr];
755 pBp->Pub.cHits = 0;
756 pBp->Pub.iHitTrigger = iHitTrigger;
757 pBp->Pub.iHitDisable = iHitDisable;
758 pBp->Pub.hOwner = hOwner;
759 pBp->Pub.u16Type = DBGF_BP_PUB_MAKE_TYPE(enmType);
760 pBp->Pub.fFlags = fFlags & ~DBGF_BP_F_ENABLED; /* The enabled flag is handled in the respective APIs. */
761 pBp->pvUserR3 = pvUser;
762
763 /** @todo Owner handling (reference and call ring-0 if it has an ring-0 callback). */
764
765 *phBp = DBGF_BP_HND_CREATE(idChunk, iClr);
766 *ppBp = pBp;
767 return VINF_SUCCESS;
768 }
769 /* else Retry with another spot. */
770 }
771 else /* no free entry in bitmap, go to the next chunk */
772 break;
773 }
774 else /* !cBpsFree, go to the next chunk */
775 break;
776 }
777 }
778
779 rc = dbgfR3BpOwnerRelease(pUVM, hOwner); AssertRC(rc);
780 return VERR_DBGF_NO_MORE_BP_SLOTS;
781}
782
783
784/**
785 * Frees the given breakpoint handle.
786 *
787 * @param pUVM The user mode VM handle.
788 * @param hBp The breakpoint handle to free.
789 * @param pBp The internal breakpoint state pointer.
790 */
791static void dbgfR3BpFree(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
792{
793 uint32_t idChunk = DBGF_BP_HND_GET_CHUNK_ID(hBp);
794 uint32_t idxEntry = DBGF_BP_HND_GET_ENTRY(hBp);
795
796 AssertReturnVoid(idChunk < DBGF_BP_CHUNK_COUNT);
797 AssertReturnVoid(idxEntry < DBGF_BP_COUNT_PER_CHUNK);
798
799 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
800 AssertPtrReturnVoid(pBpChunk->pbmAlloc);
801 AssertReturnVoid(ASMBitTest(pBpChunk->pbmAlloc, idxEntry));
802
803 /** @todo Need a trip to Ring-0 if an owner is assigned with a Ring-0 part to clear the breakpoint. */
804 int rc = dbgfR3BpOwnerRelease(pUVM, pBp->Pub.hOwner); AssertRC(rc); RT_NOREF(rc);
805 memset(pBp, 0, sizeof(*pBp));
806
807 ASMAtomicBitClear(pBpChunk->pbmAlloc, idxEntry);
808 ASMAtomicIncU32(&pBpChunk->cBpsFree);
809}
810
811
812/**
813 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
814 */
815static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpL2TblChunkAllocEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
816{
817 uint32_t idChunk = (uint32_t)(uintptr_t)pvUser;
818
819 VMCPU_ASSERT_EMT(pVCpu);
820 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
821
822 AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, VERR_DBGF_BP_IPE_1);
823
824 PUVM pUVM = pVM->pUVM;
825 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
826
827 AssertReturn( pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID
828 || pL2Chunk->idChunk == idChunk,
829 VERR_DBGF_BP_IPE_2);
830
831 /*
832 * The initialization will be done on EMT(0). It is possible that multiple
833 * allocation attempts are done when multiple racing non EMT threads try to
834 * allocate a breakpoint and a new chunk needs to be allocated.
835 * Ignore the request and succeed if the chunk is allocated meaning that a
836 * previous rendezvous successfully allocated the chunk.
837 */
838 int rc = VINF_SUCCESS;
839 if ( pVCpu->idCpu == 0
840 && pL2Chunk->idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
841 {
842 /* Allocate the bitmap first so we can skip calling into VMMR0 if it fails. */
843 AssertCompile(!(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK % 64));
844 void *pbmAlloc = RTMemAllocZ(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK / 8);
845 if (RT_LIKELY(pbmAlloc))
846 {
847 if (!SUPR3IsDriverless())
848 {
849 DBGFBPL2TBLCHUNKALLOCREQ Req;
850 Req.Hdr.u32Magic = SUPVMMR0REQHDR_MAGIC;
851 Req.Hdr.cbReq = sizeof(Req);
852 Req.idChunk = idChunk;
853 Req.pChunkBaseR3 = NULL;
854 rc = VMMR3CallR0Emt(pVM, pVCpu, VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC, 0 /*u64Arg*/, &Req.Hdr);
855 if (RT_SUCCESS(rc))
856 pL2Chunk->pL2BaseR3 = (PDBGFBPL2ENTRY)Req.pChunkBaseR3;
857 else
858 AssertLogRelMsgRC(rc, ("VMMR0_DO_DBGF_BP_L2_TBL_CHUNK_ALLOC failed: %Rrc\n", rc));
859 }
860 else
861 {
862 /* Driverless: Do dbgfR0BpL2TblChunkAllocWorker here, ring-3 style. */
863 uint32_t const cbTotal = RT_ALIGN_32(DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK * sizeof(DBGFBPL2ENTRY), HOST_PAGE_SIZE);
864 pL2Chunk->pL2BaseR3 = (PDBGFBPL2ENTRY)RTMemPageAllocZ(cbTotal);
865 AssertLogRelMsgStmt(pL2Chunk->pL2BaseR3, ("cbTotal=%#x\n", cbTotal), rc = VERR_NO_PAGE_MEMORY);
866 }
867 if (RT_SUCCESS(rc))
868 {
869 pL2Chunk->pbmAlloc = (void volatile *)pbmAlloc;
870 pL2Chunk->cFree = DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK;
871 pL2Chunk->idChunk = idChunk;
872 return VINF_SUCCESS;
873 }
874
875 RTMemFree(pbmAlloc);
876 }
877 else
878 rc = VERR_NO_MEMORY;
879 }
880
881 return rc;
882}
883
884
885/**
886 * Tries to allocate the given L2 table chunk which requires an EMT rendezvous.
887 *
888 * @returns VBox status code.
889 * @param pUVM The user mode VM handle.
890 * @param idChunk The chunk to allocate.
891 *
892 * @thread Any thread.
893 */
894DECLINLINE(int) dbgfR3BpL2TblChunkAlloc(PUVM pUVM, uint32_t idChunk)
895{
896 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpL2TblChunkAllocEmtWorker, (void *)(uintptr_t)idChunk);
897}
898
899
900/**
901 * Tries to allocate a new L2 table entry.
902 *
903 * @returns VBox status code.
904 * @param pUVM The user mode VM handle.
905 * @param pidxL2Tbl Where to return the L2 table entry index on success.
906 * @param ppL2TblEntry Where to return the pointer to the L2 table entry on success.
907 *
908 * @thread Any thread.
909 */
910static int dbgfR3BpL2TblEntryAlloc(PUVM pUVM, uint32_t *pidxL2Tbl, PDBGFBPL2ENTRY *ppL2TblEntry)
911{
912 /*
913 * Search for a chunk having a free entry, allocating new chunks
914 * if the encountered ones are full.
915 *
916 * This can be called from multiple threads at the same time so special care
917 * has to be taken to not require any locking here.
918 */
919 for (uint32_t i = 0; i < RT_ELEMENTS(pUVM->dbgf.s.aBpL2TblChunks); i++)
920 {
921 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[i];
922
923 uint32_t idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
924 if (idChunk == DBGF_BP_L2_IDX_CHUNK_ID_INVALID)
925 {
926 int rc = dbgfR3BpL2TblChunkAlloc(pUVM, i);
927 if (RT_FAILURE(rc))
928 {
929 LogRel(("DBGF/Bp: Allocating new breakpoint L2 lookup table chunk failed with %Rrc\n", rc));
930 break;
931 }
932
933 idChunk = ASMAtomicReadU32(&pL2Chunk->idChunk);
934 Assert(idChunk == i);
935 }
936
937 /** @todo Optimize with some hinting if this turns out to be too slow. */
938 for (;;)
939 {
940 uint32_t cFree = ASMAtomicReadU32(&pL2Chunk->cFree);
941 if (cFree)
942 {
943 /*
944 * Scan the associated bitmap for a free entry, if none can be found another thread
945 * raced us and we go to the next chunk.
946 */
947 int32_t iClr = ASMBitFirstClear(pL2Chunk->pbmAlloc, DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);
948 if (iClr != -1)
949 {
950 /*
951 * Try to allocate, we could get raced here as well. In that case
952 * we try again.
953 */
954 if (!ASMAtomicBitTestAndSet(pL2Chunk->pbmAlloc, iClr))
955 {
956 /* Success, immediately mark as allocated, initialize the breakpoint state and return. */
957 ASMAtomicDecU32(&pL2Chunk->cFree);
958
959 PDBGFBPL2ENTRY pL2Entry = &pL2Chunk->pL2BaseR3[iClr];
960
961 *pidxL2Tbl = DBGF_BP_L2_IDX_CREATE(idChunk, iClr);
962 *ppL2TblEntry = pL2Entry;
963 return VINF_SUCCESS;
964 }
965 /* else Retry with another spot. */
966 }
967 else /* no free entry in bitmap, go to the next chunk */
968 break;
969 }
970 else /* !cFree, go to the next chunk */
971 break;
972 }
973 }
974
975 return VERR_DBGF_NO_MORE_BP_SLOTS;
976}
977
978
979/**
980 * Frees the given L2 table entry.
981 *
982 * @param pUVM The user mode VM handle.
983 * @param idxL2Tbl The L2 table index to free.
984 * @param pL2TblEntry The L2 table entry pointer to free.
985 */
986static void dbgfR3BpL2TblEntryFree(PUVM pUVM, uint32_t idxL2Tbl, PDBGFBPL2ENTRY pL2TblEntry)
987{
988 uint32_t idChunk = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2Tbl);
989 uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2Tbl);
990
991 AssertReturnVoid(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT);
992 AssertReturnVoid(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK);
993
994 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
995 AssertPtrReturnVoid(pL2Chunk->pbmAlloc);
996 AssertReturnVoid(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry));
997
998 memset(pL2TblEntry, 0, sizeof(*pL2TblEntry));
999
1000 ASMAtomicBitClear(pL2Chunk->pbmAlloc, idxEntry);
1001 ASMAtomicIncU32(&pL2Chunk->cFree);
1002}
1003
1004
1005/**
1006 * Sets the enabled flag of the given breakpoint to the given value.
1007 *
1008 * @param pBp The breakpoint to set the state for.
1009 * @param fEnabled Enabled status.
1010 */
1011DECLINLINE(void) dbgfR3BpSetEnabled(PDBGFBPINT pBp, bool fEnabled)
1012{
1013 if (fEnabled)
1014 pBp->Pub.fFlags |= DBGF_BP_F_ENABLED;
1015 else
1016 pBp->Pub.fFlags &= ~DBGF_BP_F_ENABLED;
1017}
1018
1019
1020/**
1021 * Assigns a hardware breakpoint state to the given register breakpoint.
1022 *
1023 * @returns VBox status code.
1024 * @param pVM The cross-context VM structure pointer.
1025 * @param hBp The breakpoint handle to assign.
1026 * @param pBp The internal breakpoint state.
1027 *
1028 * @thread Any thread.
1029 */
1030static int dbgfR3BpRegAssign(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
1031{
1032 AssertReturn(pBp->Pub.u.Reg.iReg == UINT8_MAX, VERR_DBGF_BP_IPE_3);
1033
1034 for (uint8_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
1035 {
1036 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
1037
1038 AssertCompileSize(DBGFBP, sizeof(uint32_t));
1039 if (ASMAtomicCmpXchgU32(&pHwBp->hBp, hBp, NIL_DBGFBP))
1040 {
1041 pHwBp->GCPtr = pBp->Pub.u.Reg.GCPtr;
1042 pHwBp->fType = pBp->Pub.u.Reg.fType;
1043 pHwBp->cb = pBp->Pub.u.Reg.cb;
1044 pHwBp->fEnabled = DBGF_BP_PUB_IS_ENABLED(&pBp->Pub);
1045
1046 pBp->Pub.u.Reg.iReg = i;
1047 return VINF_SUCCESS;
1048 }
1049 }
1050
1051 return VERR_DBGF_NO_MORE_BP_SLOTS;
1052}
1053
1054
1055/**
1056 * Removes the assigned hardware breakpoint state from the given register breakpoint.
1057 *
1058 * @returns VBox status code.
1059 * @param pVM The cross-context VM structure pointer.
1060 * @param hBp The breakpoint handle to remove.
1061 * @param pBp The internal breakpoint state.
1062 *
1063 * @thread Any thread.
1064 */
1065static int dbgfR3BpRegRemove(PVM pVM, DBGFBP hBp, PDBGFBPINT pBp)
1066{
1067 AssertReturn(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints), VERR_DBGF_BP_IPE_3);
1068
1069 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1070 AssertReturn(pHwBp->hBp == hBp, VERR_DBGF_BP_IPE_4);
1071 AssertReturn(!pHwBp->fEnabled, VERR_DBGF_BP_IPE_5);
1072
1073 pHwBp->GCPtr = 0;
1074 pHwBp->fType = 0;
1075 pHwBp->cb = 0;
1076 ASMCompilerBarrier();
1077
1078 ASMAtomicWriteU32(&pHwBp->hBp, NIL_DBGFBP);
1079 return VINF_SUCCESS;
1080}
1081
1082
1083/**
1084 * Returns the pointer to the L2 table entry from the given index.
1085 *
1086 * @returns Current context pointer to the L2 table entry or NULL if the provided index value is invalid.
1087 * @param pUVM The user mode VM handle.
1088 * @param idxL2 The L2 table index to resolve.
1089 *
1090 * @note The content of the resolved L2 table entry is not validated!
1091 */
1092DECLINLINE(PDBGFBPL2ENTRY) dbgfR3BpL2GetByIdx(PUVM pUVM, uint32_t idxL2)
1093{
1094 uint32_t idChunk = DBGF_BP_L2_IDX_GET_CHUNK_ID(idxL2);
1095 uint32_t idxEntry = DBGF_BP_L2_IDX_GET_ENTRY(idxL2);
1096
1097 AssertReturn(idChunk < DBGF_BP_L2_TBL_CHUNK_COUNT, NULL);
1098 AssertReturn(idxEntry < DBGF_BP_L2_TBL_ENTRIES_PER_CHUNK, NULL);
1099
1100 PDBGFBPL2TBLCHUNKR3 pL2Chunk = &pUVM->dbgf.s.aBpL2TblChunks[idChunk];
1101 AssertPtrReturn(pL2Chunk->pbmAlloc, NULL);
1102 AssertReturn(ASMBitTest(pL2Chunk->pbmAlloc, idxEntry), NULL);
1103
1104 return &pL2Chunk->CTX_SUFF(pL2Base)[idxEntry];
1105}
1106
1107
1108/**
1109 * Creates a binary search tree with the given root and leaf nodes.
1110 *
1111 * @returns VBox status code.
1112 * @param pUVM The user mode VM handle.
1113 * @param idxL1 The index into the L1 table where the created tree should be linked into.
1114 * @param u32EntryOld The old entry in the L1 table used to compare with in the atomic update.
1115 * @param hBpRoot The root node's DBGF handle to assign.
1116 * @param GCPtrRoot The root node's GC pointer to use as a key.
1117 * @param hBpLeaf The leaf node's DBGF handle to assign.
1118 * @param GCPtrLeaf The leaf node's GC pointer to use as a key.
1119 */
1120static int dbgfR3BpInt3L2BstCreate(PUVM pUVM, uint32_t idxL1, uint32_t u32EntryOld,
1121 DBGFBP hBpRoot, RTGCUINTPTR GCPtrRoot,
1122 DBGFBP hBpLeaf, RTGCUINTPTR GCPtrLeaf)
1123{
1124 AssertReturn(GCPtrRoot != GCPtrLeaf, VERR_DBGF_BP_IPE_9);
1125 Assert(DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrRoot) == DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtrLeaf));
1126
1127 /* Allocate two nodes. */
1128 uint32_t idxL2Root = 0;
1129 PDBGFBPL2ENTRY pL2Root = NULL;
1130 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Root, &pL2Root);
1131 if (RT_SUCCESS(rc))
1132 {
1133 uint32_t idxL2Leaf = 0;
1134 PDBGFBPL2ENTRY pL2Leaf = NULL;
1135 rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Leaf, &pL2Leaf);
1136 if (RT_SUCCESS(rc))
1137 {
1138 dbgfBpL2TblEntryInit(pL2Leaf, hBpLeaf, GCPtrLeaf, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1139 if (GCPtrLeaf < GCPtrRoot)
1140 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, idxL2Leaf, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1141 else
1142 dbgfBpL2TblEntryInit(pL2Root, hBpRoot, GCPtrRoot, DBGF_BP_L2_ENTRY_IDX_END, idxL2Leaf, 0 /*iDepth*/);
1143
1144 uint32_t const u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2Root);
1145 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, u32EntryOld))
1146 return VINF_SUCCESS;
1147
1148 /* The L1 entry has changed due to another thread racing us during insertion, free nodes and try again. */
1149 dbgfR3BpL2TblEntryFree(pUVM, idxL2Leaf, pL2Leaf);
1150 rc = VINF_TRY_AGAIN;
1151 }
1152
1153 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Root);
1154 }
1155
1156 return rc;
1157}
1158
1159
1160/**
1161 * Inserts the given breakpoint handle into an existing binary search tree.
1162 *
1163 * @returns VBox status code.
1164 * @param pUVM The user mode VM handle.
1165 * @param idxL2Root The index of the tree root in the L2 table.
1166 * @param hBp The node DBGF handle to insert.
1167 * @param GCPtr The node's GC pointer to use as a key.
1168 */
1169static int dbgfR3BpInt2L2BstNodeInsert(PUVM pUVM, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1170{
1171 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1172
1173 /* Allocate a new node first. */
1174 uint32_t idxL2Nd = 0;
1175 PDBGFBPL2ENTRY pL2Nd = NULL;
1176 int rc = dbgfR3BpL2TblEntryAlloc(pUVM, &idxL2Nd, &pL2Nd);
1177 if (RT_SUCCESS(rc))
1178 {
1179 /* Walk the tree and find the correct node to insert to. */
1180 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1181 while (RT_LIKELY(pL2Entry))
1182 {
1183 /* Make a copy of the entry. */
1184 DBGFBPL2ENTRY L2Entry;
1185 L2Entry.u64GCPtrKeyAndBpHnd1 = ASMAtomicReadU64(&pL2Entry->u64GCPtrKeyAndBpHnd1);
1186 L2Entry.u64LeftRightIdxDepthBpHnd2 = ASMAtomicReadU64(&pL2Entry->u64LeftRightIdxDepthBpHnd2);
1187
1188 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(L2Entry.u64GCPtrKeyAndBpHnd1);
1189 AssertBreak(GCPtr != GCPtrL2Entry);
1190
1191 /* Not found, get to the next level. */
1192 uint32_t idxL2Next = GCPtr < GCPtrL2Entry
1193 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(L2Entry.u64LeftRightIdxDepthBpHnd2)
1194 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(L2Entry.u64LeftRightIdxDepthBpHnd2);
1195 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1196 {
1197 /* Insert the new node here. */
1198 dbgfBpL2TblEntryInit(pL2Nd, hBp, GCPtr, DBGF_BP_L2_ENTRY_IDX_END, DBGF_BP_L2_ENTRY_IDX_END, 0 /*iDepth*/);
1199 if (GCPtr < GCPtrL2Entry)
1200 dbgfBpL2TblEntryUpdateLeft(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1201 else
1202 dbgfBpL2TblEntryUpdateRight(pL2Entry, idxL2Nd, 0 /*iDepth*/);
1203 return VINF_SUCCESS;
1204 }
1205
1206 pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1207 }
1208
1209 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1210 rc = VERR_DBGF_BP_L2_LOOKUP_FAILED;
1211 }
1212
1213 return rc;
1214}
1215
1216
1217/**
1218 * Adds the given breakpoint handle keyed with the GC pointer to the proper L2 binary search tree
1219 * possibly creating a new tree.
1220 *
1221 * @returns VBox status code.
1222 * @param pUVM The user mode VM handle.
1223 * @param idxL1 The index into the L1 table the breakpoint uses.
1224 * @param hBp The breakpoint handle which is to be added.
1225 * @param GCPtr The GC pointer the breakpoint is keyed with.
1226 */
1227static int dbgfR3BpInt3L2BstNodeAdd(PUVM pUVM, uint32_t idxL1, DBGFBP hBp, RTGCUINTPTR GCPtr)
1228{
1229 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1230
1231 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]); /* Re-read, could get raced by a remove operation. */
1232 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1233 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1234 {
1235 /* Create a new search tree, gather the necessary information first. */
1236 DBGFBP hBp2 = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);
1237 PDBGFBPINT pBp2 = dbgfR3BpGetByHnd(pUVM, hBp2);
1238 AssertStmt(RT_VALID_PTR(pBp2), rc = VERR_DBGF_BP_IPE_7);
1239 if (RT_SUCCESS(rc))
1240 rc = dbgfR3BpInt3L2BstCreate(pUVM, idxL1, u32Entry, hBp, GCPtr, hBp2, pBp2->Pub.u.Sw.GCPtr);
1241 }
1242 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1243 rc = dbgfR3BpInt2L2BstNodeInsert(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry), hBp, GCPtr);
1244
1245 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1246 return rc;
1247}
1248
1249
1250/**
1251 * Gets the leftmost entry from the given tree node start index.
1252 *
1253 * @returns VBox status code.
1254 * @param pUVM The user mode VM handle.
1255 * @param idxL2Start The start index to walk from.
1256 * @param pidxL2Leftmost Where to store the L2 table index of the leftmost entry.
1257 * @param ppL2NdLeftmost Where to store the pointer to the leftmost L2 table entry.
1258 * @param pidxL2NdLeftParent Where to store the L2 table index of the leftmost entry's parent.
1259 * @param ppL2NdLeftParent Where to store the pointer to the leftmost L2 table entry's parent.
1260 */
1261static int dbgfR33BpInt3BstGetLeftmostEntryFromNode(PUVM pUVM, uint32_t idxL2Start,
1262 uint32_t *pidxL2Leftmost, PDBGFBPL2ENTRY *ppL2NdLeftmost,
1263 uint32_t *pidxL2NdLeftParent, PDBGFBPL2ENTRY *ppL2NdLeftParent)
1264{
1265 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1266 PDBGFBPL2ENTRY pL2NdParent = NULL;
1267
1268 for (;;)
1269 {
1270 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Start);
1271 AssertPtr(pL2Entry);
1272
1273 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1274 if (idxL2Left == DBGF_BP_L2_ENTRY_IDX_END) /* no left child, so the current node is the leftmost */
1275 {
1276 *pidxL2Leftmost = idxL2Start;
1277 *ppL2NdLeftmost = pL2Entry;
1278 *pidxL2NdLeftParent = idxL2Parent;
1279 *ppL2NdLeftParent = pL2NdParent;
1280 break;
1281 }
1282
1283 idxL2Parent = idxL2Start;
1284 idxL2Start = idxL2Left;
1285 pL2NdParent = pL2Entry;
1286 }
1287
1288 return VINF_SUCCESS;
1289}
1290
1291
1292/**
1293 * Removes the given node rearranging the tree.
1294 *
1295 * @returns VBox status code.
1296 * @param pUVM The user mode VM handle.
1297 * @param idxL1 The index into the L1 table pointing to the binary search tree containing the node.
1298 * @param idxL2Root The L2 table index where the tree root is located.
1299 * @param idxL2Nd The node index to remove.
1300 * @param pL2Nd The L2 table entry to remove.
1301 * @param idxL2NdParent The parent's index, can be DBGF_BP_L2_ENTRY_IDX_END if the root is about to be removed.
1302 * @param pL2NdParent The parent's L2 table entry, can be NULL if the root is about to be removed.
1303 * @param fLeftChild Flag whether the node is the left child of the parent or the right one.
1304 */
1305static int dbgfR3BpInt3BstNodeRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root,
1306 uint32_t idxL2Nd, PDBGFBPL2ENTRY pL2Nd,
1307 uint32_t idxL2NdParent, PDBGFBPL2ENTRY pL2NdParent,
1308 bool fLeftChild)
1309{
1310 /*
1311 * If there are only two nodes remaining the tree will get destroyed and the
1312 * L1 entry will be converted to the direct handle type.
1313 */
1314 uint32_t idxL2Left = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1315 uint32_t idxL2Right = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1316
1317 Assert(idxL2NdParent != DBGF_BP_L2_ENTRY_IDX_END || !pL2NdParent); RT_NOREF(idxL2NdParent);
1318 uint32_t idxL2ParentNew = DBGF_BP_L2_ENTRY_IDX_END;
1319 if (idxL2Right == DBGF_BP_L2_ENTRY_IDX_END)
1320 idxL2ParentNew = idxL2Left;
1321 else
1322 {
1323 /* Find the leftmost entry of the right subtree and move it to the to-be-removed node's location in the tree. */
1324 PDBGFBPL2ENTRY pL2NdLeftmostParent = NULL;
1325 PDBGFBPL2ENTRY pL2NdLeftmost = NULL;
1326 uint32_t idxL2NdLeftmostParent = DBGF_BP_L2_ENTRY_IDX_END;
1327 uint32_t idxL2Leftmost = DBGF_BP_L2_ENTRY_IDX_END;
1328 int rc = dbgfR33BpInt3BstGetLeftmostEntryFromNode(pUVM, idxL2Right, &idxL2Leftmost ,&pL2NdLeftmost,
1329 &idxL2NdLeftmostParent, &pL2NdLeftmostParent);
1330 AssertRCReturn(rc, rc);
1331
1332 if (pL2NdLeftmostParent)
1333 {
1334 /* Rearrange the leftmost entry's parent pointer. */
1335 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmostParent, DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2NdLeftmost->u64LeftRightIdxDepthBpHnd2), 0 /*iDepth*/);
1336 dbgfBpL2TblEntryUpdateRight(pL2NdLeftmost, idxL2Right, 0 /*iDepth*/);
1337 }
1338
1339 dbgfBpL2TblEntryUpdateLeft(pL2NdLeftmost, idxL2Left, 0 /*iDepth*/);
1340
1341 /* Update the removed node's parent to point to the new node. */
1342 idxL2ParentNew = idxL2Leftmost;
1343 }
1344
1345 if (pL2NdParent)
1346 {
1347 /* Assign the new L2 index to the proper parent's left or right pointer. */
1348 if (fLeftChild)
1349 dbgfBpL2TblEntryUpdateLeft(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1350 else
1351 dbgfBpL2TblEntryUpdateRight(pL2NdParent, idxL2ParentNew, 0 /*iDepth*/);
1352 }
1353 else
1354 {
1355 /* The root node is removed, set the new root in the L1 table. */
1356 Assert(idxL2ParentNew != DBGF_BP_L2_ENTRY_IDX_END);
1357 idxL2Root = idxL2ParentNew;
1358 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_L2_IDX(idxL2ParentNew));
1359 }
1360
1361 /* Free the node. */
1362 dbgfR3BpL2TblEntryFree(pUVM, idxL2Nd, pL2Nd);
1363
1364 /*
1365 * Check whether the old/new root is the only node remaining and convert the L1
1366 * table entry to a direct breakpoint handle one in that case.
1367 */
1368 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Root);
1369 AssertPtr(pL2Nd);
1370 if ( DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END
1371 && DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2) == DBGF_BP_L2_ENTRY_IDX_END)
1372 {
1373 DBGFBP hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1374 dbgfR3BpL2TblEntryFree(pUVM, idxL2Root, pL2Nd);
1375 ASMAtomicXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp));
1376 }
1377
1378 return VINF_SUCCESS;
1379}
1380
1381
1382/**
1383 * Removes the given breakpoint handle keyed with the GC pointer from the L2 binary search tree
1384 * pointed to by the given L2 root index.
1385 *
1386 * @returns VBox status code.
1387 * @param pUVM The user mode VM handle.
1388 * @param idxL1 The index into the L1 table pointing to the binary search tree.
1389 * @param idxL2Root The L2 table index where the tree root is located.
1390 * @param hBp The breakpoint handle which is to be removed.
1391 * @param GCPtr The GC pointer the breakpoint is keyed with.
1392 */
1393static int dbgfR3BpInt3L2BstRemove(PUVM pUVM, uint32_t idxL1, uint32_t idxL2Root, DBGFBP hBp, RTGCUINTPTR GCPtr)
1394{
1395 GCPtr = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1396
1397 int rc = RTSemFastMutexRequest(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc);
1398
1399 uint32_t idxL2Cur = idxL2Root;
1400 uint32_t idxL2Parent = DBGF_BP_L2_ENTRY_IDX_END;
1401 bool fLeftChild = false;
1402 PDBGFBPL2ENTRY pL2EntryParent = NULL;
1403 for (;;)
1404 {
1405 PDBGFBPL2ENTRY pL2Entry = dbgfR3BpL2GetByIdx(pUVM, idxL2Cur);
1406 AssertPtr(pL2Entry);
1407
1408 /* Check whether this node is to be removed. */
1409 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Entry->u64GCPtrKeyAndBpHnd1);
1410 if (GCPtrL2Entry == GCPtr)
1411 {
1412 Assert(DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Entry->u64GCPtrKeyAndBpHnd1, pL2Entry->u64LeftRightIdxDepthBpHnd2) == hBp); RT_NOREF(hBp);
1413
1414 rc = dbgfR3BpInt3BstNodeRemove(pUVM, idxL1, idxL2Root, idxL2Cur, pL2Entry, idxL2Parent, pL2EntryParent, fLeftChild);
1415 break;
1416 }
1417
1418 pL2EntryParent = pL2Entry;
1419 idxL2Parent = idxL2Cur;
1420
1421 if (GCPtr < GCPtrL2Entry) /* target key is smaller, descend into the left subtree */
1422 {
1423 fLeftChild = true;
1424 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1425 }
1426 else
1427 {
1428 fLeftChild = false;
1429 idxL2Cur = DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Entry->u64LeftRightIdxDepthBpHnd2);
1430 }
1431
1432 AssertBreakStmt(idxL2Cur != DBGF_BP_L2_ENTRY_IDX_END, rc = VERR_DBGF_BP_L2_LOOKUP_FAILED);
1433 }
1434
1435 int rc2 = RTSemFastMutexRelease(pUVM->dbgf.s.hMtxBpL2Wr); AssertRC(rc2);
1436
1437 return rc;
1438}
1439
1440
1441/**
1442 * Adds the given int3 breakpoint to the appropriate lookup tables.
1443 *
1444 * @returns VBox status code.
1445 * @param pUVM The user mode VM handle.
1446 * @param hBp The breakpoint handle to add.
1447 * @param pBp The internal breakpoint state.
1448 */
1449static int dbgfR3BpInt3Add(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1450{
1451 AssertReturn(DBGF_BP_PUB_GET_TYPE(&pBp->Pub) == DBGFBPTYPE_SOFTWARE, VERR_DBGF_BP_IPE_3);
1452
1453 int rc = VINF_SUCCESS;
1454 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Sw.GCPtr);
1455 uint8_t cTries = 16;
1456
1457 while (cTries--)
1458 {
1459 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1460 if (u32Entry == DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1461 {
1462 /*
1463 * No breakpoint assigned so far for this entry, create an entry containing
1464 * the direct breakpoint handle and try to exchange it atomically.
1465 */
1466 u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp);
1467 if (ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], u32Entry, DBGF_BP_INT3_L1_ENTRY_TYPE_NULL))
1468 break;
1469 }
1470 else
1471 {
1472 rc = dbgfR3BpInt3L2BstNodeAdd(pUVM, idxL1, hBp, pBp->Pub.u.Sw.GCPtr);
1473 if (rc != VINF_TRY_AGAIN)
1474 break;
1475 }
1476 }
1477
1478 if ( RT_SUCCESS(rc)
1479 && !cTries) /* Too much contention, abort with an error. */
1480 rc = VERR_DBGF_BP_INT3_ADD_TRIES_REACHED;
1481
1482 return rc;
1483}
1484
1485
1486/**
1487 * Adds the given port I/O breakpoint to the appropriate lookup tables.
1488 *
1489 * @returns VBox status code.
1490 * @param pUVM The user mode VM handle.
1491 * @param hBp The breakpoint handle to add.
1492 * @param pBp The internal breakpoint state.
1493 */
1494static int dbgfR3BpPortIoAdd(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1495{
1496 AssertReturn(DBGF_BP_PUB_GET_TYPE(&pBp->Pub) == DBGFBPTYPE_PORT_IO, VERR_DBGF_BP_IPE_3);
1497
1498 uint16_t uPortExcl = pBp->Pub.u.PortIo.uPort + pBp->Pub.u.PortIo.cPorts;
1499 uint32_t u32Entry = DBGF_BP_INT3_L1_ENTRY_CREATE_BP_HND(hBp);
1500 for (uint16_t idxPort = pBp->Pub.u.PortIo.uPort; idxPort < uPortExcl; idxPort++)
1501 {
1502 bool fXchg = ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocPortIoR3[idxPort], u32Entry, DBGF_BP_INT3_L1_ENTRY_TYPE_NULL);
1503 if (!fXchg)
1504 {
1505 /* Something raced us, so roll back the other registrations. */
1506 while (idxPort-- > pBp->Pub.u.PortIo.uPort) /* step back to the last successfully registered port */
1507 {
1508 fXchg = ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocPortIoR3[idxPort], DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, u32Entry);
1509 Assert(fXchg); RT_NOREF(fXchg);
1510 }
1511
1512 return VERR_DBGF_BP_INT3_ADD_TRIES_REACHED; /** @todo New status code */
1513 }
1514 }
1515
1516 return VINF_SUCCESS;
1517}
1518
1519
1520/**
1521 * Get a breakpoint given by address.
1522 *
1523 * @returns The breakpoint handle on success or NIL_DBGFBP if not found.
1524 * @param pUVM The user mode VM handle.
1525 * @param enmType The breakpoint type.
1526 * @param GCPtr The breakpoint address.
1527 * @param ppBp Where to store the pointer to the internal breakpoint state on success, optional.
1528 */
1529static DBGFBP dbgfR3BpGetByAddr(PUVM pUVM, DBGFBPTYPE enmType, RTGCUINTPTR GCPtr, PDBGFBPINT *ppBp)
1530{
1531 DBGFBP hBp = NIL_DBGFBP;
1532
1533 switch (enmType)
1534 {
1535 case DBGFBPTYPE_REG:
1536 {
1537 PVM pVM = pUVM->pVM;
1538 VM_ASSERT_VALID_EXT_RETURN(pVM, NIL_DBGFBP);
1539
1540 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); i++)
1541 {
1542 PDBGFBPHW pHwBp = &pVM->dbgf.s.aHwBreakpoints[i];
1543
1544 AssertCompileSize(DBGFBP, sizeof(uint32_t));
1545 DBGFBP hBpTmp = ASMAtomicReadU32(&pHwBp->hBp);
1546 if ( pHwBp->GCPtr == GCPtr
1547 && hBpTmp != NIL_DBGFBP)
1548 {
1549 hBp = hBpTmp;
1550 break;
1551 }
1552 }
1553 break;
1554 }
1555
1556 case DBGFBPTYPE_SOFTWARE:
1557 {
1558 const uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(GCPtr);
1559 const uint32_t u32L1Entry = ASMAtomicReadU32(&pUVM->dbgf.s.CTX_SUFF(paBpLocL1)[idxL1]);
1560
1561 if (u32L1Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1562 {
1563 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32L1Entry);
1564 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1565 hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32L1Entry);
1566 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1567 {
1568 RTGCUINTPTR GCPtrKey = DBGF_BP_INT3_L2_KEY_EXTRACT_FROM_ADDR(GCPtr);
1569 PDBGFBPL2ENTRY pL2Nd = dbgfR3BpL2GetByIdx(pUVM, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32L1Entry));
1570
1571 for (;;)
1572 {
1573 AssertPtr(pL2Nd);
1574
1575 RTGCUINTPTR GCPtrL2Entry = DBGF_BP_L2_ENTRY_GET_GCPTR(pL2Nd->u64GCPtrKeyAndBpHnd1);
1576 if (GCPtrKey == GCPtrL2Entry)
1577 {
1578 hBp = DBGF_BP_L2_ENTRY_GET_BP_HND(pL2Nd->u64GCPtrKeyAndBpHnd1, pL2Nd->u64LeftRightIdxDepthBpHnd2);
1579 break;
1580 }
1581
1582 /* Not found, get to the next level. */
1583 uint32_t idxL2Next = GCPtrKey < GCPtrL2Entry
1584 ? DBGF_BP_L2_ENTRY_GET_IDX_LEFT(pL2Nd->u64LeftRightIdxDepthBpHnd2)
1585 : DBGF_BP_L2_ENTRY_GET_IDX_RIGHT(pL2Nd->u64LeftRightIdxDepthBpHnd2);
1586 /* Address not found if the entry denotes the end. */
1587 if (idxL2Next == DBGF_BP_L2_ENTRY_IDX_END)
1588 break;
1589
1590 pL2Nd = dbgfR3BpL2GetByIdx(pUVM, idxL2Next);
1591 }
1592 }
1593 }
1594 break;
1595 }
1596
1597 default:
1598 AssertMsgFailed(("enmType=%d\n", enmType));
1599 break;
1600 }
1601
1602 if ( hBp != NIL_DBGFBP
1603 && ppBp)
1604 *ppBp = dbgfR3BpGetByHnd(pUVM, hBp);
1605 return hBp;
1606}
1607
1608
1609/**
1610 * Get a port I/O breakpoint given by the range.
1611 *
1612 * @returns The breakpoint handle on success or NIL_DBGFBP if not found.
1613 * @param pUVM The user mode VM handle.
1614 * @param uPort First port in the range.
1615 * @param cPorts Number of ports in the range.
1616 * @param ppBp Where to store the pointer to the internal breakpoint state on success, optional.
1617 */
1618static DBGFBP dbgfR3BpPortIoGetByRange(PUVM pUVM, RTIOPORT uPort, RTIOPORT cPorts, PDBGFBPINT *ppBp)
1619{
1620 DBGFBP hBp = NIL_DBGFBP;
1621
1622 for (RTIOPORT idxPort = uPort; idxPort < uPort + cPorts; idxPort++)
1623 {
1624 const uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.CTX_SUFF(paBpLocPortIo)[idxPort]);
1625 if (u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL)
1626 {
1627 hBp = DBGF_BP_INT3_L1_ENTRY_GET_BP_HND(u32Entry);
1628 break;
1629 }
1630 }
1631
1632 if ( hBp != NIL_DBGFBP
1633 && ppBp)
1634 *ppBp = dbgfR3BpGetByHnd(pUVM, hBp);
1635 return hBp;
1636}
1637
1638
1639/**
1640 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1641 */
1642static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpInt3RemoveEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
1643{
1644 DBGFBP hBp = (DBGFBP)(uintptr_t)pvUser;
1645
1646 VMCPU_ASSERT_EMT(pVCpu);
1647 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
1648
1649 PUVM pUVM = pVM->pUVM;
1650 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
1651 AssertPtrReturn(pBp, VERR_DBGF_BP_IPE_8);
1652
1653 int rc = VINF_SUCCESS;
1654 if (pVCpu->idCpu == 0)
1655 {
1656 uint16_t idxL1 = DBGF_BP_INT3_L1_IDX_EXTRACT_FROM_ADDR(pBp->Pub.u.Sw.GCPtr);
1657 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1658 AssertReturn(u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, VERR_DBGF_BP_IPE_6);
1659
1660 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1661 if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND)
1662 {
1663 /* Single breakpoint, just exchange atomically with the null value. */
1664 if (!ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1], DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, u32Entry))
1665 {
1666 /*
1667 * A breakpoint addition must have raced us converting the L1 entry to an L2 index type, re-read
1668 * and remove the node from the created binary search tree.
1669 *
1670 * This works because after the entry was converted to an L2 index it can only be converted back
1671 * to a direct handle by removing one or more nodes which always goes through the fast mutex
1672 * protecting the L2 table. Likewise adding a new breakpoint requires grabbing the mutex as well
1673 * so there is serialization here and the node can be removed safely without having to worry about
1674 * concurrent tree modifications.
1675 */
1676 u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocL1R3[idxL1]);
1677 AssertReturn(DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry) == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX, VERR_DBGF_BP_IPE_9);
1678
1679 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1680 hBp, pBp->Pub.u.Sw.GCPtr);
1681 }
1682 }
1683 else if (u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_L2_IDX)
1684 rc = dbgfR3BpInt3L2BstRemove(pUVM, idxL1, DBGF_BP_INT3_L1_ENTRY_GET_L2_IDX(u32Entry),
1685 hBp, pBp->Pub.u.Sw.GCPtr);
1686 }
1687
1688 return rc;
1689}
1690
1691
1692/**
1693 * Removes the given int3 breakpoint from all lookup tables.
1694 *
1695 * @returns VBox status code.
1696 * @param pUVM The user mode VM handle.
1697 * @param hBp The breakpoint handle to remove.
1698 * @param pBp The internal breakpoint state.
1699 */
1700static int dbgfR3BpInt3Remove(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1701{
1702 AssertReturn(DBGF_BP_PUB_GET_TYPE(&pBp->Pub) == DBGFBPTYPE_SOFTWARE, VERR_DBGF_BP_IPE_3);
1703
1704 /*
1705 * This has to be done in an EMT rendezvous so that no EMT is traversing
1706 * any L2 tree while the breakpoint is being removed.
1707 */
1708 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpInt3RemoveEmtWorker, (void *)(uintptr_t)hBp);
1709}
1710
1711
1712/**
1713 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1714 */
1715static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpPortIoRemoveEmtWorker(PVM pVM, PVMCPU pVCpu, void *pvUser)
1716{
1717 DBGFBP hBp = (DBGFBP)(uintptr_t)pvUser;
1718
1719 VMCPU_ASSERT_EMT(pVCpu);
1720 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
1721
1722 PUVM pUVM = pVM->pUVM;
1723 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
1724 AssertPtrReturn(pBp, VERR_DBGF_BP_IPE_8);
1725
1726 int rc = VINF_SUCCESS;
1727 if (pVCpu->idCpu == 0)
1728 {
1729 /*
1730 * Remove the whole range; there shouldn't be any other breakpoint configured for this range as that is not
1731 * allowed right now.
1732 */
1733 uint16_t uPortExcl = pBp->Pub.u.PortIo.uPort + pBp->Pub.u.PortIo.cPorts;
1734 for (uint16_t idxPort = pBp->Pub.u.PortIo.uPort; idxPort < uPortExcl; idxPort++)
1735 {
1736 uint32_t u32Entry = ASMAtomicReadU32(&pUVM->dbgf.s.paBpLocPortIoR3[idxPort]);
1737 AssertReturn(u32Entry != DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, VERR_DBGF_BP_IPE_6);
1738
1739 uint8_t u8Type = DBGF_BP_INT3_L1_ENTRY_GET_TYPE(u32Entry);
1740 AssertReturn(u8Type == DBGF_BP_INT3_L1_ENTRY_TYPE_BP_HND, VERR_DBGF_BP_IPE_7);
1741
1742 bool fXchg = ASMAtomicCmpXchgU32(&pUVM->dbgf.s.paBpLocPortIoR3[idxPort], DBGF_BP_INT3_L1_ENTRY_TYPE_NULL, u32Entry);
1743 Assert(fXchg); RT_NOREF(fXchg);
1744 }
1745 }
1746
1747 return rc;
1748}
1749
1750
1751/**
1752 * Removes the given port I/O breakpoint from all lookup tables.
1753 *
1754 * @returns VBox status code.
1755 * @param pUVM The user mode VM handle.
1756 * @param hBp The breakpoint handle to remove.
1757 * @param pBp The internal breakpoint state.
1758 */
1759static int dbgfR3BpPortIoRemove(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1760{
1761 AssertReturn(DBGF_BP_PUB_GET_TYPE(&pBp->Pub) == DBGFBPTYPE_PORT_IO, VERR_DBGF_BP_IPE_3);
1762
1763 /*
1764 * This has to be done in an EMT rendezvous so that no EMT is accessing
1765 * the breakpoint while it is being removed.
1766 */
1767 return VMMR3EmtRendezvous(pUVM->pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpPortIoRemoveEmtWorker, (void *)(uintptr_t)hBp);
1768}
1769
1770
1771/**
1772 * @callback_method_impl{FNVMMEMTRENDEZVOUS}
1773 */
1774static DECLCALLBACK(VBOXSTRICTRC) dbgfR3BpRegRecalcOnCpu(PVM pVM, PVMCPU pVCpu, void *pvUser)
1775{
1776 RT_NOREF(pvUser);
1777
1778#if defined(VBOX_VMM_TARGET_ARMV8)
1779 RT_NOREF(pVM, pVCpu);
1780 AssertReleaseFailed();
1781 return VERR_NOT_IMPLEMENTED;
1782#else
1783 /*
1784 * CPU 0 updates the enabled hardware breakpoint counts.
1785 */
1786 if (pVCpu->idCpu == 0)
1787 {
1788 pVM->dbgf.s.cEnabledHwBreakpoints = 0;
1789 pVM->dbgf.s.cEnabledHwIoBreakpoints = 0;
1790
1791 for (uint32_t iBp = 0; iBp < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints); iBp++)
1792 {
1793 if (pVM->dbgf.s.aHwBreakpoints[iBp].fEnabled)
1794 {
1795 pVM->dbgf.s.cEnabledHwBreakpoints += 1;
1796 pVM->dbgf.s.cEnabledHwIoBreakpoints += pVM->dbgf.s.aHwBreakpoints[iBp].fType == X86_DR7_RW_IO;
1797 }
1798 }
1799 }
1800
1801 return CPUMRecalcHyperDRx(pVCpu, UINT8_MAX);
1802#endif
1803}
1804
1805
1806/**
1807 * Arms the given breakpoint.
1808 *
1809 * @returns VBox status code.
1810 * @param pUVM The user mode VM handle.
1811 * @param hBp The breakpoint handle to arm.
1812 * @param pBp The internal breakpoint state pointer for the handle.
1813 *
1814 * @thread Any thread.
1815 */
1816static int dbgfR3BpArm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1817{
1818 int rc;
1819 PVM pVM = pUVM->pVM;
1820
1821 Assert(!DBGF_BP_PUB_IS_ENABLED(&pBp->Pub));
1822 switch (DBGF_BP_PUB_GET_TYPE(&pBp->Pub))
1823 {
1824 case DBGFBPTYPE_REG:
1825 {
1826 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1827 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1828 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1829
1830 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1831 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1832 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1833 if (RT_FAILURE(rc))
1834 {
1835 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1836 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1837 }
1838 break;
1839 }
1840 case DBGFBPTYPE_SOFTWARE:
1841 {
1842 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1843
1844 /** @todo When we enable the first software breakpoint we should do this in an EMT rendezvous
1845 * as the VMX code intercepts #BP only when at least one int3 breakpoint is enabled.
1846 * A racing vCPU might trigger it and forward it to the guest causing panics/crashes/havoc. */
1847#ifdef VBOX_VMM_TARGET_ARMV8
1848 /*
1849 * Save original instruction and replace with brk
1850 */
1851 rc = PGMPhysSimpleReadGCPhys(pVM, &pBp->Pub.u.Sw.Arch.armv8.u32Org, pBp->Pub.u.Sw.PhysAddr, sizeof(pBp->Pub.u.Sw.Arch.armv8.u32Org));
1852 if (RT_SUCCESS(rc))
1853 {
1854 static const uint32_t s_u32Brk = Armv8A64MkInstrBrk(0xc0de);
1855 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Sw.PhysAddr, &s_u32Brk, sizeof(s_u32Brk));
1856 }
1857#else
1858 /*
1859 * Save current byte and write the int3 instruction byte.
1860 */
1861 rc = PGMPhysSimpleReadGCPhys(pVM, &pBp->Pub.u.Sw.Arch.x86.bOrg, pBp->Pub.u.Sw.PhysAddr, sizeof(pBp->Pub.u.Sw.Arch.x86.bOrg));
1862 if (RT_SUCCESS(rc))
1863 {
1864 static const uint8_t s_bInt3 = 0xcc;
1865 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Sw.PhysAddr, &s_bInt3, sizeof(s_bInt3));
1866 }
1867#endif
1868 if (RT_SUCCESS(rc))
1869 {
1870 ASMAtomicIncU32(&pVM->dbgf.s.cEnabledSwBreakpoints);
1871 Log(("DBGF: Set breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Sw.GCPtr, pBp->Pub.u.Sw.PhysAddr));
1872 }
1873
1874 if (RT_FAILURE(rc))
1875 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1876
1877 break;
1878 }
1879 case DBGFBPTYPE_PORT_IO:
1880 {
1881 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1882 ASMAtomicIncU32(&pUVM->dbgf.s.cPortIoBps);
1883 IOMR3NotifyBreakpointCountChange(pVM, true /*fPortIo*/, false /*fMmio*/);
1884 rc = VINF_SUCCESS;
1885 break;
1886 }
1887 case DBGFBPTYPE_MMIO:
1888 rc = VERR_NOT_IMPLEMENTED;
1889 break;
1890 default:
1891 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(&pBp->Pub)),
1892 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1893 }
1894
1895 return rc;
1896}
1897
1898
1899/**
1900 * Disarms the given breakpoint.
1901 *
1902 * @returns VBox status code.
1903 * @param pUVM The user mode VM handle.
1904 * @param hBp The breakpoint handle to disarm.
1905 * @param pBp The internal breakpoint state pointer for the handle.
1906 *
1907 * @thread Any thread.
1908 */
1909static int dbgfR3BpDisarm(PUVM pUVM, DBGFBP hBp, PDBGFBPINT pBp)
1910{
1911 int rc;
1912 PVM pVM = pUVM->pVM;
1913
1914 Assert(DBGF_BP_PUB_IS_ENABLED(&pBp->Pub));
1915 switch (DBGF_BP_PUB_GET_TYPE(&pBp->Pub))
1916 {
1917 case DBGFBPTYPE_REG:
1918 {
1919 Assert(pBp->Pub.u.Reg.iReg < RT_ELEMENTS(pVM->dbgf.s.aHwBreakpoints));
1920 PDBGFBPHW pBpHw = &pVM->dbgf.s.aHwBreakpoints[pBp->Pub.u.Reg.iReg];
1921 Assert(pBpHw->hBp == hBp); RT_NOREF(hBp);
1922
1923 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1924 ASMAtomicWriteBool(&pBpHw->fEnabled, false);
1925 rc = VMMR3EmtRendezvous(pVM, VMMEMTRENDEZVOUS_FLAGS_TYPE_ALL_AT_ONCE, dbgfR3BpRegRecalcOnCpu, NULL);
1926 if (RT_FAILURE(rc))
1927 {
1928 ASMAtomicWriteBool(&pBpHw->fEnabled, true);
1929 dbgfR3BpSetEnabled(pBp, true /*fEnabled*/);
1930 }
1931 break;
1932 }
1933 case DBGFBPTYPE_SOFTWARE:
1934 {
1935 /*
1936 * Check that the current byte is the int3 instruction, and restore the original one.
1937 * We currently ignore invalid bytes.
1938 */
1939#ifdef VBOX_VMM_TARGET_ARMV8
1940 uint32_t u32Current = 0;
1941 rc = PGMPhysSimpleReadGCPhys(pVM, &u32Current, pBp->Pub.u.Sw.PhysAddr, sizeof(u32Current));
1942 if ( RT_SUCCESS(rc)
1943 && u32Current == Armv8A64MkInstrBrk(0xc0de))
1944 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Sw.PhysAddr, &pBp->Pub.u.Sw.Arch.armv8.u32Org, sizeof(pBp->Pub.u.Sw.Arch.armv8.u32Org));
1945#else
1946 uint8_t bCurrent = 0;
1947 rc = PGMPhysSimpleReadGCPhys(pVM, &bCurrent, pBp->Pub.u.Sw.PhysAddr, sizeof(bCurrent));
1948 if ( RT_SUCCESS(rc)
1949 && bCurrent == 0xcc)
1950 rc = PGMPhysSimpleWriteGCPhys(pVM, pBp->Pub.u.Sw.PhysAddr, &pBp->Pub.u.Sw.Arch.x86.bOrg, sizeof(pBp->Pub.u.Sw.Arch.x86.bOrg));
1951#endif
1952
1953 if (RT_SUCCESS(rc))
1954 {
1955 ASMAtomicDecU32(&pVM->dbgf.s.cEnabledSwBreakpoints);
1956 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1957 Log(("DBGF: Removed breakpoint at %RGv (Phys %RGp)\n", pBp->Pub.u.Sw.GCPtr, pBp->Pub.u.Sw.PhysAddr));
1958 }
1959 break;
1960 }
1961 case DBGFBPTYPE_PORT_IO:
1962 {
1963 dbgfR3BpSetEnabled(pBp, false /*fEnabled*/);
1964 uint32_t cPortIoBps = ASMAtomicDecU32(&pUVM->dbgf.s.cPortIoBps);
1965 if (!cPortIoBps) /** @todo Need to gather all EMTs to not have a stray EMT accessing BP data when it might go away. */
1966 IOMR3NotifyBreakpointCountChange(pVM, false /*fPortIo*/, false /*fMmio*/);
1967 rc = VINF_SUCCESS;
1968 break;
1969 }
1970 case DBGFBPTYPE_MMIO:
1971 rc = VERR_NOT_IMPLEMENTED;
1972 break;
1973 default:
1974 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(&pBp->Pub)),
1975 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
1976 }
1977
1978 return rc;
1979}
1980
1981
1982/**
1983 * Worker for DBGFR3BpHit() differentiating on the breakpoint type.
1984 *
1985 * @returns Strict VBox status code.
1986 * @param pVM The cross context VM structure.
1987 * @param pVCpu The vCPU the breakpoint event happened on.
1988 * @param hBp The breakpoint handle.
1989 * @param pBp The breakpoint data.
1990 * @param pBpOwner The breakpoint owner data.
1991 *
1992 * @thread EMT
1993 */
1994static VBOXSTRICTRC dbgfR3BpHit(PVM pVM, PVMCPU pVCpu, DBGFBP hBp, PDBGFBPINT pBp, PCDBGFBPOWNERINT pBpOwner)
1995{
1996 VBOXSTRICTRC rcStrict = VINF_SUCCESS;
1997
1998 switch (DBGF_BP_PUB_GET_TYPE(&pBp->Pub))
1999 {
2000 case DBGFBPTYPE_REG:
2001 case DBGFBPTYPE_SOFTWARE:
2002 {
2003 if (DBGF_BP_PUB_IS_EXEC_BEFORE(&pBp->Pub))
2004 rcStrict = pBpOwner->pfnBpHitR3(pVM, pVCpu->idCpu, pBp->pvUserR3, hBp, &pBp->Pub, DBGF_BP_F_HIT_EXEC_BEFORE);
2005 if (rcStrict == VINF_SUCCESS)
2006 {
2007 uint8_t abInstr[DBGF_BP_INSN_MAX];
2008 RTGCPTR const GCPtrInstr = CPUMGetGuestFlatPC(pVCpu);
2009 int rc = PGMPhysSimpleReadGCPtr(pVCpu, &abInstr[0], GCPtrInstr, sizeof(abInstr));
2010 AssertRC(rc);
2011 if (RT_SUCCESS(rc))
2012 {
2013#ifdef VBOX_VMM_TARGET_ARMV8
2014 AssertFailed();
2015 rc = VERR_NOT_IMPLEMENTED;
2016#else
2017 /* Replace the int3 with the original instruction byte. */
2018 abInstr[0] = pBp->Pub.u.Sw.Arch.x86.bOrg;
2019 rcStrict = IEMExecOneWithPrefetchedByPC(pVCpu, GCPtrInstr, &abInstr[0], sizeof(abInstr));
2020#endif
2021 if ( rcStrict == VINF_SUCCESS
2022 && DBGF_BP_PUB_IS_EXEC_AFTER(&pBp->Pub))
2023 {
2024 VBOXSTRICTRC rcStrict2 = pBpOwner->pfnBpHitR3(pVM, pVCpu->idCpu, pBp->pvUserR3, hBp, &pBp->Pub,
2025 DBGF_BP_F_HIT_EXEC_AFTER);
2026 if (rcStrict2 == VINF_SUCCESS)
2027 return VBOXSTRICTRC_VAL(rcStrict);
2028 if (rcStrict2 != VINF_DBGF_BP_HALT)
2029 return VERR_DBGF_BP_OWNER_CALLBACK_WRONG_STATUS;
2030 }
2031 else
2032 return VBOXSTRICTRC_VAL(rcStrict);
2033 }
2034 }
2035 break;
2036 }
2037 case DBGFBPTYPE_PORT_IO:
2038 case DBGFBPTYPE_MMIO:
2039 {
2040 pVCpu->dbgf.s.fBpIoActive = false;
2041 rcStrict = pBpOwner->pfnBpIoHitR3(pVM, pVCpu->idCpu, pBp->pvUserR3, hBp, &pBp->Pub,
2042 pVCpu->dbgf.s.fBpIoBefore
2043 ? DBGF_BP_F_HIT_EXEC_BEFORE
2044 : DBGF_BP_F_HIT_EXEC_AFTER,
2045 pVCpu->dbgf.s.fBpIoAccess, pVCpu->dbgf.s.uBpIoAddress,
2046 pVCpu->dbgf.s.uBpIoValue);
2047
2048 break;
2049 }
2050 default:
2051 AssertMsgFailedReturn(("Invalid breakpoint type %d\n", DBGF_BP_PUB_GET_TYPE(&pBp->Pub)),
2052 VERR_IPE_NOT_REACHED_DEFAULT_CASE);
2053 }
2054
2055 return rcStrict;
2056}
2057
2058
2059/**
2060 * Creates a new breakpoint owner returning a handle which can be used when setting breakpoints.
2061 *
2062 * @returns VBox status code.
2063 * @retval VERR_DBGF_BP_OWNER_NO_MORE_HANDLES if there are no more free owner handles available.
2064 * @param pUVM The user mode VM handle.
2065 * @param pfnBpHit The R3 callback which is called when a breakpoint with the owner handle is hit.
2066 * @param pfnBpIoHit The R3 callback which is called when an I/O breakpoint with the owner handle is hit.
2067 * @param phBpOwner Where to store the owner handle on success.
2068 *
2069 * @thread Any thread but might defer work to EMT on the first call.
2070 */
2071VMMR3DECL(int) DBGFR3BpOwnerCreate(PUVM pUVM, PFNDBGFBPHIT pfnBpHit, PFNDBGFBPIOHIT pfnBpIoHit, PDBGFBPOWNER phBpOwner)
2072{
2073 /*
2074 * Validate the input.
2075 */
2076 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2077 AssertReturn(pfnBpHit || pfnBpIoHit, VERR_INVALID_PARAMETER);
2078 AssertPtrReturn(phBpOwner, VERR_INVALID_POINTER);
2079
2080 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
2081 AssertRCReturn(rc, rc);
2082
2083 /* Try to find a free entry in the owner table. */
2084 for (;;)
2085 {
2086 /* Scan the associated bitmap for a free entry. */
2087 int32_t iClr = ASMBitFirstClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, DBGF_BP_OWNER_COUNT_MAX);
2088 if (iClr != -1)
2089 {
2090 /*
2091 * Try to allocate, we could get raced here as well. In that case
2092 * we try again.
2093 */
2094 if (!ASMAtomicBitTestAndSet(pUVM->dbgf.s.pbmBpOwnersAllocR3, iClr))
2095 {
2096 PDBGFBPOWNERINT pBpOwner = &pUVM->dbgf.s.paBpOwnersR3[iClr];
2097 pBpOwner->cRefs = 1;
2098 pBpOwner->pfnBpHitR3 = pfnBpHit;
2099 pBpOwner->pfnBpIoHitR3 = pfnBpIoHit;
2100
2101 *phBpOwner = (DBGFBPOWNER)iClr;
2102 return VINF_SUCCESS;
2103 }
2104 /* else Retry with another spot. */
2105 }
2106 else /* no free entry in bitmap, out of entries. */
2107 {
2108 rc = VERR_DBGF_BP_OWNER_NO_MORE_HANDLES;
2109 break;
2110 }
2111 }
2112
2113 return rc;
2114}
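/** @par Usage sketch
 * A minimal, illustrative example of registering an owner with a ring-3 hit callback and no I/O
 * callback, assuming a valid pUVM.  The callback parameter list is inferred from how pfnBpHitR3
 * is invoked in dbgfR3BpHit() above; the authoritative typedef is PFNDBGFBPHIT in dbgf.h.
 * All 'my'-prefixed names are made up.
 * @code
 *  static DECLCALLBACK(VBOXSTRICTRC) myBpHit(PVM pVM, VMCPUID idCpu, void *pvUserBp,
 *                                            DBGFBP hBp, PCDBGFBPPUB pBpPub, uint16_t fFlags)
 *  {
 *      RT_NOREF(pVM, pvUserBp, pBpPub, fFlags);
 *      LogRel(("Breakpoint %#x hit on vCPU %u\n", hBp, idCpu));
 *      return VINF_SUCCESS; // resume the guest; VINF_DBGF_BP_HALT drops into the debugger instead
 *  }
 *
 *  // ... then, with a valid pUVM:
 *  DBGFBPOWNER hBpOwner = NIL_DBGFBPOWNER;
 *  int rc = DBGFR3BpOwnerCreate(pUVM, myBpHit, NULL, &hBpOwner); // NULL = no I/O hit callback
 * @endcode
 * The handle can then be passed as hOwner to DBGFR3BpSetInt3Ex() and friends and should only be
 * destroyed with DBGFR3BpOwnerDestroy() once no breakpoint references it anymore.
 */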
2115
2116
2117/**
2118 * Destroys the owner identified by the given handle.
2119 *
2120 * @returns VBox status code.
2121 * @retval VERR_INVALID_HANDLE if the given owner handle is invalid.
2122 * @retval VERR_DBGF_OWNER_BUSY if there are still breakpoints set with the given owner handle.
2123 * @param pUVM The user mode VM handle.
2124 * @param hBpOwner The breakpoint owner handle to destroy.
2125 */
2126VMMR3DECL(int) DBGFR3BpOwnerDestroy(PUVM pUVM, DBGFBPOWNER hBpOwner)
2127{
2128 /*
2129 * Validate the input.
2130 */
2131 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2132 AssertReturn(hBpOwner != NIL_DBGFBPOWNER, VERR_INVALID_HANDLE);
2133
2134 int rc = dbgfR3BpOwnerEnsureInit(pUVM);
2135 AssertRCReturn(rc, rc);
2136
2137 PDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pUVM, hBpOwner);
2138 if (RT_LIKELY(pBpOwner))
2139 {
2140 if (ASMAtomicReadU32(&pBpOwner->cRefs) == 1)
2141 {
2142 pBpOwner->pfnBpHitR3 = NULL;
2143 ASMAtomicDecU32(&pBpOwner->cRefs);
2144 ASMAtomicBitClear(pUVM->dbgf.s.pbmBpOwnersAllocR3, hBpOwner);
2145 }
2146 else
2147 rc = VERR_DBGF_OWNER_BUSY;
2148 }
2149 else
2150 rc = VERR_INVALID_HANDLE;
2151
2152 return rc;
2153}
2154
2155
2156/**
2157 * Sets a breakpoint (int 3 based).
2158 *
2159 * @returns VBox status code.
2160 * @param pUVM The user mode VM handle.
2161 * @param idSrcCpu The ID of the virtual CPU used for the
2162 * breakpoint address resolution.
2163 * @param pAddress The address of the breakpoint.
2164 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
2165 * Use 0 (or 1) if it's gonna trigger at once.
2166 * @param iHitDisable The hit count which disables the breakpoint.
2167 * Use ~(uint64_t)0 if it's never gonna be disabled.
2168 * @param phBp Where to store the breakpoint handle on success.
2169 *
2170 * @thread Any thread.
2171 */
2172VMMR3DECL(int) DBGFR3BpSetInt3(PUVM pUVM, VMCPUID idSrcCpu, PCDBGFADDRESS pAddress,
2173 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2174{
2175 return DBGFR3BpSetInt3Ex(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, idSrcCpu, pAddress,
2176 DBGF_BP_F_DEFAULT, iHitTrigger, iHitDisable, phBp);
2177}
2178
2179
2180/**
2181 * Sets a breakpoint (int 3 based) - extended version.
2182 *
2183 * @returns VBox status code.
2184 * @param pUVM The user mode VM handle.
2185 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2186 * @param pvUser Opaque user data to pass in the owner callback.
2187 * @param idSrcCpu The ID of the virtual CPU used for the
2188 * breakpoint address resolution.
2189 * @param pAddress The address of the breakpoint.
2190 * @param fFlags Combination of DBGF_BP_F_XXX.
2191 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
2192 * Use 0 (or 1) if it's gonna trigger at once.
2193 * @param iHitDisable The hit count which disables the breakpoint.
2194 * Use ~(uint64_t)0 if it's never gonna be disabled.
2195 * @param phBp Where to store the breakpoint handle on success.
2196 *
2197 * @thread Any thread.
2198 */
2199VMMR3DECL(int) DBGFR3BpSetInt3Ex(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2200 VMCPUID idSrcCpu, PCDBGFADDRESS pAddress, uint16_t fFlags,
2201 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2202{
2203 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2204 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2205 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
2206 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2207 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2208
2209 int rc = dbgfR3BpEnsureInit(pUVM);
2210 AssertRCReturn(rc, rc);
2211
2212 /*
2213 * Translate & save the breakpoint address into a guest-physical address.
2214 */
2215 RTGCPHYS GCPhysBpAddr = NIL_RTGCPHYS;
2216 rc = DBGFR3AddrToPhys(pUVM, idSrcCpu, pAddress, &GCPhysBpAddr);
2217 if (RT_SUCCESS(rc))
2218 {
2219 /*
2220 * The physical address from DBGFR3AddrToPhys() is the start of the page;
2221 * we need the exact byte offset into the page when writing to it in dbgfR3BpInt3Arm().
2222 */
2223 GCPhysBpAddr |= (pAddress->FlatPtr & X86_PAGE_OFFSET_MASK);
2224
2225 PDBGFBPINT pBp = NULL;
2226 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_SOFTWARE, pAddress->FlatPtr, &pBp);
2227 if ( hBp != NIL_DBGFBP
2228 && pBp->Pub.u.Sw.PhysAddr == GCPhysBpAddr)
2229 {
2230 rc = VINF_SUCCESS;
2231 if ( !DBGF_BP_PUB_IS_ENABLED(&pBp->Pub)
2232 && (fFlags & DBGF_BP_F_ENABLED))
2233 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2234 if (RT_SUCCESS(rc))
2235 {
2236 rc = VINF_DBGF_BP_ALREADY_EXIST;
2237 *phBp = hBp;
2238 }
2239 return rc;
2240 }
2241
2242 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_SOFTWARE, fFlags, iHitTrigger, iHitDisable, &hBp, &pBp);
2243 if (RT_SUCCESS(rc))
2244 {
2245 pBp->Pub.u.Sw.PhysAddr = GCPhysBpAddr;
2246 pBp->Pub.u.Sw.GCPtr = pAddress->FlatPtr;
2247
2248 /* Add the breakpoint to the lookup tables. */
2249 rc = dbgfR3BpInt3Add(pUVM, hBp, pBp);
2250 if (RT_SUCCESS(rc))
2251 {
2252 /* Enable the breakpoint if requested. */
2253 if (fFlags & DBGF_BP_F_ENABLED)
2254 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2255 if (RT_SUCCESS(rc))
2256 {
2257 *phBp = hBp;
2258 return VINF_SUCCESS;
2259 }
2260
2261 int rc2 = dbgfR3BpInt3Remove(pUVM, hBp, pBp); AssertRC(rc2);
2262 }
2263
2264 dbgfR3BpFree(pUVM, hBp, pBp);
2265 }
2266 }
2267
2268 return rc;
2269}
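/** @par Usage sketch
 * A minimal, illustrative example of setting and arming a software breakpoint, assuming a valid
 * pUVM, an owner handle from DBGFR3BpOwnerCreate() (or NIL_DBGFBPOWNER), and the
 * DBGFR3AddrFromFlat() address helper; the guest address and the 'my'-prefixed names are made up.
 * @code
 *  DBGFADDRESS Addr;
 *  DBGFR3AddrFromFlat(pUVM, &Addr, 0xffffffff81000000); // example guest address only
 *
 *  DBGFBP hBp = NIL_DBGFBP;
 *  int rc = DBGFR3BpSetInt3Ex(pUVM, hBpOwner, pvMyUser, 0, &Addr,
 *                             DBGF_BP_F_DEFAULT | DBGF_BP_F_ENABLED, 0, UINT64_MAX, &hBp);
 *  // Arguments after pvMyUser: idSrcCpu, address, flags, iHitTrigger, iHitDisable, out handle.
 *  // rc == VINF_DBGF_BP_ALREADY_EXIST means a matching breakpoint already existed and hBp refers to it.
 * @endcode
 * The handle is released again with DBGFR3BpClear() further down.
 */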
2270
2271
2272/**
2273 * Sets a register breakpoint.
2274 *
2275 * @returns VBox status code.
2276 * @param pUVM The user mode VM handle.
2277 * @param pAddress The address of the breakpoint.
2278 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
2279 * Use 0 (or 1) if it's gonna trigger at once.
2280 * @param iHitDisable The hit count which disables the breakpoint.
2281 * Use ~(uint64_t)0 if it's never gonna be disabled.
2282 * @param fType The access type (one of the X86_DR7_RW_* defines).
2283 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
2284 * Must be 1 if fType is X86_DR7_RW_EO.
2285 * @param phBp Where to store the breakpoint handle.
2286 *
2287 * @thread Any thread.
2288 */
2289VMMR3DECL(int) DBGFR3BpSetReg(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
2290 uint64_t iHitDisable, uint8_t fType, uint8_t cb, PDBGFBP phBp)
2291{
2292 return DBGFR3BpSetRegEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, pAddress,
2293 DBGF_BP_F_DEFAULT, iHitTrigger, iHitDisable, fType, cb, phBp);
2294}
2295
2296
2297/**
2298 * Sets a register breakpoint - extended version.
2299 *
2300 * @returns VBox status code.
2301 * @param pUVM The user mode VM handle.
2302 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2303 * @param pvUser Opaque user data to pass in the owner callback.
2304 * @param pAddress The address of the breakpoint.
2305 * @param fFlags Combination of DBGF_BP_F_XXX.
2306 * @param iHitTrigger The hit count at which the breakpoint starts triggering.
2307 * Use 0 (or 1) if it's gonna trigger at once.
2308 * @param iHitDisable The hit count which disables the breakpoint.
2309 * Use ~(uint64_t)0 if it's never gonna be disabled.
2310 * @param fType The access type (one of the X86_DR7_RW_* defines).
2311 * @param cb The access size - 1, 2, 4 or 8 (the latter is AMD64 long mode only).
2312 * Must be 1 if fType is X86_DR7_RW_EO.
2313 * @param phBp Where to store the breakpoint handle.
2314 *
2315 * @thread Any thread.
2316 */
2317VMMR3DECL(int) DBGFR3BpSetRegEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2318 PCDBGFADDRESS pAddress, uint16_t fFlags,
2319 uint64_t iHitTrigger, uint64_t iHitDisable,
2320 uint8_t fType, uint8_t cb, PDBGFBP phBp)
2321{
2322 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2323 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2324 AssertReturn(DBGFR3AddrIsValid(pUVM, pAddress), VERR_INVALID_PARAMETER);
2325 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2326 AssertReturn(cb > 0 && cb <= 8 && RT_IS_POWER_OF_TWO(cb), VERR_INVALID_PARAMETER);
2327 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2328 switch (fType)
2329 {
2330 case X86_DR7_RW_EO:
2331 AssertMsgReturn(cb == 1, ("fType=%#x cb=%d != 1\n", fType, cb), VERR_INVALID_PARAMETER);
2332 break;
2333 case X86_DR7_RW_IO:
2334 case X86_DR7_RW_RW:
2335 case X86_DR7_RW_WO:
2336 break;
2337 default:
2338 AssertMsgFailedReturn(("fType=%#x\n", fType), VERR_INVALID_PARAMETER);
2339 }
2340
2341 int rc = dbgfR3BpEnsureInit(pUVM);
2342 AssertRCReturn(rc, rc);
2343
2344 /*
2345 * Check if we've already got a matching breakpoint for that address.
2346 */
2347 PDBGFBPINT pBp = NULL;
2348 DBGFBP hBp = dbgfR3BpGetByAddr(pUVM, DBGFBPTYPE_REG, pAddress->FlatPtr, &pBp);
2349 if ( hBp != NIL_DBGFBP
2350 && pBp->Pub.u.Reg.cb == cb
2351 && pBp->Pub.u.Reg.fType == fType)
2352 {
2353 rc = VINF_SUCCESS;
2354 if (!DBGF_BP_PUB_IS_ENABLED(&pBp->Pub) && (fFlags & DBGF_BP_F_ENABLED))
2355 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2356 /* else: We don't disable it when DBGF_BP_F_ENABLED isn't given. */
2357 if (RT_SUCCESS(rc))
2358 {
2359 rc = VINF_DBGF_BP_ALREADY_EXIST;
2360 *phBp = hBp;
2361 }
2362 return rc;
2363 }
2364
2365 /*
2366 * Allocate new breakpoint.
2367 */
2368 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_REG, fFlags, iHitTrigger, iHitDisable, &hBp, &pBp);
2369 if (RT_SUCCESS(rc))
2370 {
2371 pBp->Pub.u.Reg.GCPtr = pAddress->FlatPtr;
2372 pBp->Pub.u.Reg.fType = fType;
2373 pBp->Pub.u.Reg.cb = cb;
2374 pBp->Pub.u.Reg.iReg = UINT8_MAX;
2375 ASMCompilerBarrier();
2376
2377 /* Assign the proper hardware breakpoint. */
2378 rc = dbgfR3BpRegAssign(pUVM->pVM, hBp, pBp);
2379 if (RT_SUCCESS(rc))
2380 {
2381 /* Arm the breakpoint. */
2382 if (fFlags & DBGF_BP_F_ENABLED)
2383 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2384 if (RT_SUCCESS(rc))
2385 {
2386 *phBp = hBp;
2387 return VINF_SUCCESS;
2388 }
2389
2390 int rc2 = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2391 AssertRC(rc2); RT_NOREF(rc2);
2392 }
2393
2394 dbgfR3BpFree(pUVM, hBp, pBp);
2395 }
2396
2397 return rc;
2398}
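/** @par Usage sketch
 * A minimal, illustrative example of requesting a 4-byte write watchpoint, assuming a valid
 * pUVM and the DBGFR3AddrFromFlat() helper; the guest address is made up.  Only the few hardware
 * slots in aHwBreakpoints exist, so the assignment can run out of them.
 * @code
 *  DBGFADDRESS Addr;
 *  DBGFR3AddrFromFlat(pUVM, &Addr, 0x00007fffe0001000); // example guest address only
 *
 *  DBGFBP hBpReg = NIL_DBGFBP;
 *  int rc = DBGFR3BpSetReg(pUVM, &Addr, 0, UINT64_MAX, X86_DR7_RW_WO, 4, &hBpReg);
 *  // Arguments: address, iHitTrigger, iHitDisable, access type, access size, out handle.
 *  if (RT_SUCCESS(rc))
 *      rc = DBGFR3BpEnable(pUVM, hBpReg); // VINF_DBGF_BP_ALREADY_ENABLED if the default flags already armed it
 * @endcode
 */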
2399
2400
2401/**
2402 * This is only kept for now to not mess with the debugger implementation at this point;
2403 * recompiler breakpoints are not supported anymore (IEM has some API but it isn't implemented
2404 * and should probably be merged with the DBGF breakpoints).
2405 */
2406VMMR3DECL(int) DBGFR3BpSetREM(PUVM pUVM, PCDBGFADDRESS pAddress, uint64_t iHitTrigger,
2407 uint64_t iHitDisable, PDBGFBP phBp)
2408{
2409 RT_NOREF(pUVM, pAddress, iHitTrigger, iHitDisable, phBp);
2410 return VERR_NOT_SUPPORTED;
2411}
2412
2413
2414/**
2415 * Sets an I/O port breakpoint.
2416 *
2417 * @returns VBox status code.
2418 * @param pUVM The user mode VM handle.
2419 * @param uPort The first I/O port.
2420 * @param cPorts The number of I/O ports.
2421 * @param fAccess The access we want to break on, see DBGFBPIOACCESS_XXX.
2422 * @param iHitTrigger The hit count at which the breakpoint starts
2423 * triggering. Use 0 (or 1) if it's gonna trigger at
2424 * once.
2425 * @param iHitDisable The hit count which disables the breakpoint.
2426 * Use ~(uint64_t)0 if it's never gonna be disabled.
2427 * @param phBp Where to store the breakpoint handle.
2428 *
2429 * @thread Any thread.
2430 */
2431VMMR3DECL(int) DBGFR3BpSetPortIo(PUVM pUVM, RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2432 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2433{
2434 return DBGFR3BpSetPortIoEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, uPort, cPorts, fAccess,
2435 DBGF_BP_F_DEFAULT, iHitTrigger, iHitDisable, phBp);
2436}
2437
2438
2439/**
2440 * Sets an I/O port breakpoint - extended version.
2441 *
2442 * @returns VBox status code.
2443 * @param pUVM The user mode VM handle.
2444 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2445 * @param pvUser Opaque user data to pass in the owner callback.
2446 * @param uPort The first I/O port.
2447 * @param cPorts The number of I/O ports.
2448 * @param fAccess The access we want to break on, see DBGFBPIOACCESS_XXX.
2449 * @param fFlags Combination of DBGF_BP_F_XXX.
2450 * @param iHitTrigger The hit count at which the breakpoint starts
2451 * triggering. Use 0 (or 1) if it's gonna trigger at
2452 * once.
2453 * @param iHitDisable The hit count which disables the breakpoint.
2454 * Use ~(uint64_t)0 if it's never gonna be disabled.
2455 * @param phBp Where to store the breakpoint handle.
2456 *
2457 * @thread Any thread.
2458 */
2459VMMR3DECL(int) DBGFR3BpSetPortIoEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2460 RTIOPORT uPort, RTIOPORT cPorts, uint32_t fAccess,
2461 uint32_t fFlags, uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2462{
2463 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2464 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2465 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_PORT_IO), VERR_INVALID_FLAGS);
2466 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2467 AssertReturn(!(fFlags & ~DBGF_BP_F_VALID_MASK), VERR_INVALID_FLAGS);
2468 AssertReturn(fFlags, VERR_INVALID_FLAGS);
2469 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2470 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2471 AssertReturn(cPorts > 0, VERR_OUT_OF_RANGE);
2472 AssertReturn((RTIOPORT)(uPort + (cPorts - 1)) >= uPort, VERR_OUT_OF_RANGE);
2473
2474 int rc = dbgfR3BpPortIoEnsureInit(pUVM);
2475 AssertRCReturn(rc, rc);
2476
2477 PDBGFBPINT pBp = NULL;
2478 DBGFBP hBp = dbgfR3BpPortIoGetByRange(pUVM, uPort, cPorts, &pBp);
2479 if ( hBp != NIL_DBGFBP
2480 && pBp->Pub.u.PortIo.uPort == uPort
2481 && pBp->Pub.u.PortIo.cPorts == cPorts
2482 && pBp->Pub.u.PortIo.fAccess == fAccess)
2483 {
2484 rc = VINF_SUCCESS;
2485 if (!DBGF_BP_PUB_IS_ENABLED(&pBp->Pub))
2486 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2487 if (RT_SUCCESS(rc))
2488 {
2489 rc = VINF_DBGF_BP_ALREADY_EXIST;
2490 *phBp = hBp;
2491 }
2492 return rc;
2493 }
2494
2495 rc = dbgfR3BpAlloc(pUVM, hOwner, pvUser, DBGFBPTYPE_PORT_IO, fFlags, iHitTrigger, iHitDisable, &hBp, &pBp);
2496 if (RT_SUCCESS(rc))
2497 {
2498 pBp->Pub.u.PortIo.uPort = uPort;
2499 pBp->Pub.u.PortIo.cPorts = cPorts;
2500 pBp->Pub.u.PortIo.fAccess = fAccess;
2501
2502 /* Add the breakpoint to the lookup tables. */
2503 rc = dbgfR3BpPortIoAdd(pUVM, hBp, pBp);
2504 if (RT_SUCCESS(rc))
2505 {
2506 /* Enable the breakpoint if requested. */
2507 if (fFlags & DBGF_BP_F_ENABLED)
2508 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2509 if (RT_SUCCESS(rc))
2510 {
2511 *phBp = hBp;
2512 return VINF_SUCCESS;
2513 }
2514
2515 int rc2 = dbgfR3BpPortIoRemove(pUVM, hBp, pBp); AssertRC(rc2);
2516 }
2517
2518 dbgfR3BpFree(pUVM, hBp, pBp);
2519 }
2520
2521 return rc;
2522}
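/** @par Usage sketch
 * A minimal, illustrative example of breaking on guest accesses to a (hypothetically chosen)
 * eight-port I/O range, assuming a valid pUVM and that DBGFBPIOACCESS_READ/DBGFBPIOACCESS_WRITE
 * are the read/write bits covered by DBGFBPIOACCESS_VALID_MASK_PORT_IO checked above.
 * @code
 *  DBGFBP hBpIo = NIL_DBGFBP;
 *  int rc = DBGFR3BpSetPortIo(pUVM, 0x3f8, 8, DBGFBPIOACCESS_READ | DBGFBPIOACCESS_WRITE,
 *                             0, UINT64_MAX, &hBpIo);
 *  // Arguments: first port, number of ports, access mask, iHitTrigger, iHitDisable, out handle.
 * @endcode
 * Note that the port lookup table above holds at most one handle per port, so a range that only
 * partially overlaps an existing breakpoint fails to register rather than merging with it.
 */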
2523
2524
2525/**
2526 * Sets a memory mapped I/O breakpoint.
2527 *
2528 * @returns VBox status code.
2529 * @param pUVM The user mode VM handle.
2530 * @param GCPhys The first MMIO address.
2531 * @param cb The size of the MMIO range to break on.
2532 * @param fAccess The access we want to break on.
2533 * @param iHitTrigger The hit count at which the breakpoint starts
2534 * triggering. Use 0 (or 1) if it's gonna trigger at
2535 * once.
2536 * @param iHitDisable The hit count which disables the breakpoint.
2537 * Use ~(uint64_t)0 if it's never gonna be disabled.
2538 * @param phBp Where to store the breakpoint handle.
2539 *
2540 * @thread Any thread.
2541 */
2542VMMR3DECL(int) DBGFR3BpSetMmio(PUVM pUVM, RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2543 uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2544{
2545 return DBGFR3BpSetMmioEx(pUVM, NIL_DBGFBPOWNER, NULL /*pvUser*/, GCPhys, cb, fAccess,
2546 DBGF_BP_F_DEFAULT, iHitTrigger, iHitDisable, phBp);
2547}
2548
2549
2550/**
2551 * Sets a memory mapped I/O breakpoint - extended version.
2552 *
2553 * @returns VBox status code.
2554 * @param pUVM The user mode VM handle.
2555 * @param hOwner The owner handle, use NIL_DBGFBPOWNER if no special owner attached.
2556 * @param pvUser Opaque user data to pass in the owner callback.
2557 * @param GCPhys The first MMIO address.
2558 * @param cb The size of the MMIO range to break on.
2559 * @param fAccess The access we want to break on.
2560 * @param fFlags Combination of DBGF_BP_F_XXX.
2561 * @param iHitTrigger The hit count at which the breakpoint starts
2562 * triggering. Use 0 (or 1) if it's gonna trigger at
2563 * once.
2564 * @param iHitDisable The hit count which disables the breakpoint.
2565 * Use ~(uint64_t)0 if it's never gonna be disabled.
2566 * @param phBp Where to store the breakpoint handle.
2567 *
2568 * @thread Any thread.
2569 */
2570VMMR3DECL(int) DBGFR3BpSetMmioEx(PUVM pUVM, DBGFBPOWNER hOwner, void *pvUser,
2571 RTGCPHYS GCPhys, uint32_t cb, uint32_t fAccess,
2572 uint32_t fFlags, uint64_t iHitTrigger, uint64_t iHitDisable, PDBGFBP phBp)
2573{
2574 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2575 AssertReturn(hOwner != NIL_DBGFBPOWNER || pvUser == NULL, VERR_INVALID_PARAMETER);
2576 AssertReturn(!(fAccess & ~DBGFBPIOACCESS_VALID_MASK_MMIO), VERR_INVALID_FLAGS);
2577 AssertReturn(fAccess, VERR_INVALID_FLAGS);
2578 AssertReturn(!(fFlags & ~DBGF_BP_F_VALID_MASK), VERR_INVALID_FLAGS);
2579 AssertReturn(fFlags, VERR_INVALID_FLAGS);
2580 AssertReturn(iHitTrigger <= iHitDisable, VERR_INVALID_PARAMETER);
2581 AssertPtrReturn(phBp, VERR_INVALID_POINTER);
2582 AssertReturn(cb, VERR_OUT_OF_RANGE);
2583 AssertReturn(GCPhys + cb - 1 >= GCPhys, VERR_OUT_OF_RANGE); /* No wrap-around. */
2584
2585 int rc = dbgfR3BpEnsureInit(pUVM);
2586 AssertRCReturn(rc, rc);
2587
2588 return VERR_NOT_IMPLEMENTED;
2589}
2590
2591
2592/**
2593 * Clears a breakpoint.
2594 *
2595 * @returns VBox status code.
2596 * @param pUVM The user mode VM handle.
2597 * @param hBp The handle of the breakpoint which should be removed (cleared).
2598 *
2599 * @thread Any thread.
2600 */
2601VMMR3DECL(int) DBGFR3BpClear(PUVM pUVM, DBGFBP hBp)
2602{
2603 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2604 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2605
2606 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2607 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2608
2609 /* Disarm the breakpoint when it is enabled. */
2610 if (DBGF_BP_PUB_IS_ENABLED(&pBp->Pub))
2611 {
2612 int rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2613 AssertRC(rc);
2614 }
2615
2616 switch (DBGF_BP_PUB_GET_TYPE(&pBp->Pub))
2617 {
2618 case DBGFBPTYPE_REG:
2619 {
2620 int rc = dbgfR3BpRegRemove(pUVM->pVM, hBp, pBp);
2621 AssertRC(rc);
2622 break;
2623 }
2624 case DBGFBPTYPE_SOFTWARE:
2625 {
2626 int rc = dbgfR3BpInt3Remove(pUVM, hBp, pBp);
2627 AssertRC(rc);
2628 break;
2629 }
2630 case DBGFBPTYPE_PORT_IO:
2631 {
2632 int rc = dbgfR3BpPortIoRemove(pUVM, hBp, pBp);
2633 AssertRC(rc);
2634 break;
2635 }
2636 default:
2637 break;
2638 }
2639
2640 dbgfR3BpFree(pUVM, hBp, pBp);
2641 return VINF_SUCCESS;
2642}
2643
2644
2645/**
2646 * Enables a breakpoint.
2647 *
2648 * @returns VBox status code.
2649 * @param pUVM The user mode VM handle.
2650 * @param hBp The handle of the breakpoint which should be enabled.
2651 *
2652 * @thread Any thread.
2653 */
2654VMMR3DECL(int) DBGFR3BpEnable(PUVM pUVM, DBGFBP hBp)
2655{
2656 /*
2657 * Validate the input.
2658 */
2659 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2660 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2661
2662 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2663 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2664
2665 int rc;
2666 if (!DBGF_BP_PUB_IS_ENABLED(&pBp->Pub))
2667 rc = dbgfR3BpArm(pUVM, hBp, pBp);
2668 else
2669 rc = VINF_DBGF_BP_ALREADY_ENABLED;
2670
2671 return rc;
2672}
2673
2674
2675/**
2676 * Disables a breakpoint.
2677 *
2678 * @returns VBox status code.
2679 * @param pUVM The user mode VM handle.
2680 * @param hBp The handle of the breakpoint which should be disabled.
2681 *
2682 * @thread Any thread.
2683 */
2684VMMR3DECL(int) DBGFR3BpDisable(PUVM pUVM, DBGFBP hBp)
2685{
2686 /*
2687 * Validate the input.
2688 */
2689 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2690 AssertReturn(hBp != NIL_DBGFBP, VERR_INVALID_HANDLE);
2691
2692 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2693 AssertPtrReturn(pBp, VERR_DBGF_BP_NOT_FOUND);
2694
2695 int rc;
2696 if (DBGF_BP_PUB_IS_ENABLED(&pBp->Pub))
2697 rc = dbgfR3BpDisarm(pUVM, hBp, pBp);
2698 else
2699 rc = VINF_DBGF_BP_ALREADY_DISABLED;
2700
2701 return rc;
2702}
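/** @par Usage sketch
 * A minimal, illustrative lifecycle for a breakpoint handle obtained from one of the
 * DBGFR3BpSet*() functions above (valid pUVM and hBp assumed).
 * @code
 *  int rc = DBGFR3BpDisable(pUVM, hBp); // VINF_DBGF_BP_ALREADY_DISABLED if it wasn't armed
 *  // ... let the guest run without the breakpoint ...
 *  rc = DBGFR3BpEnable(pUVM, hBp);      // VINF_DBGF_BP_ALREADY_ENABLED if it is still armed
 *  // ... when done with it, disarm (if necessary) and free the handle:
 *  rc = DBGFR3BpClear(pUVM, hBp);
 * @endcode
 */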
2703
2704
2705/**
2706 * Enumerate the breakpoints.
2707 *
2708 * @returns VBox status code.
2709 * @param pUVM The user mode VM handle.
2710 * @param pfnCallback The callback function.
2711 * @param pvUser The user argument to pass to the callback.
2712 *
2713 * @thread Any thread.
2714 */
2715VMMR3DECL(int) DBGFR3BpEnum(PUVM pUVM, PFNDBGFBPENUM pfnCallback, void *pvUser)
2716{
2717 UVM_ASSERT_VALID_EXT_RETURN(pUVM, VERR_INVALID_VM_HANDLE);
2718
2719 for (uint32_t idChunk = 0; idChunk < RT_ELEMENTS(pUVM->dbgf.s.aBpChunks); idChunk++)
2720 {
2721 PDBGFBPCHUNKR3 pBpChunk = &pUVM->dbgf.s.aBpChunks[idChunk];
2722
2723 if (pBpChunk->idChunk == DBGF_BP_CHUNK_ID_INVALID)
2724 break; /* Stop here, as the first unallocated chunk means there are no allocated chunks after it. */
2725
2726 if (pBpChunk->cBpsFree < DBGF_BP_COUNT_PER_CHUNK)
2727 {
2728 /* Scan the bitmap for allocated entries. */
2729 int32_t iAlloc = ASMBitFirstSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK);
2730 if (iAlloc != -1)
2731 {
2732 do
2733 {
2734 DBGFBP hBp = DBGF_BP_HND_CREATE(idChunk, (uint32_t)iAlloc);
2735 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pUVM, hBp);
2736
2737 /* Make a copy of the breakpoint's public data to have a consistent view. */
2738 DBGFBPPUB BpPub;
2739 BpPub.cHits = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.cHits);
2740 BpPub.iHitTrigger = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitTrigger);
2741 BpPub.iHitDisable = ASMAtomicReadU64((volatile uint64_t *)&pBp->Pub.iHitDisable);
2742 BpPub.hOwner = ASMAtomicReadU32((volatile uint32_t *)&pBp->Pub.hOwner);
2743 BpPub.u16Type = ASMAtomicReadU16((volatile uint16_t *)&pBp->Pub.u16Type); /* Actually constant. */
2744 BpPub.fFlags = ASMAtomicReadU16((volatile uint16_t *)&pBp->Pub.fFlags);
2745 memcpy(&BpPub.u, &pBp->Pub.u, sizeof(pBp->Pub.u)); /* Is constant after allocation. */
2746
2747 /* Check if a removal raced us. */
2748 if (ASMBitTest(pBpChunk->pbmAlloc, iAlloc))
2749 {
2750 int rc = pfnCallback(pUVM, pvUser, hBp, &BpPub);
2751 if (RT_FAILURE(rc) || rc == VINF_CALLBACK_RETURN)
2752 return rc;
2753 }
2754
2755 iAlloc = ASMBitNextSet(pBpChunk->pbmAlloc, DBGF_BP_COUNT_PER_CHUNK, iAlloc);
2756 } while (iAlloc != -1);
2757 }
2758 }
2759 }
2760
2761 return VINF_SUCCESS;
2762}
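/** @par Usage sketch
 * A minimal, illustrative enumeration callback logging every breakpoint; the parameter list is
 * inferred from how pfnCallback is invoked above, the authoritative typedef being FNDBGFBPENUM
 * in dbgf.h.  All 'my'-prefixed names are made up.
 * @code
 *  static DECLCALLBACK(int) myBpEnumCb(PUVM pUVM, void *pvUser, DBGFBP hBp, PCDBGFBPPUB pBpPub)
 *  {
 *      RT_NOREF(pUVM, pvUser);
 *      LogRel(("bp %#x: type=%d hits=%RU64\n", hBp, DBGF_BP_PUB_GET_TYPE(pBpPub), pBpPub->cHits));
 *      return VINF_SUCCESS; // VINF_CALLBACK_RETURN or a failure status stops the enumeration
 *  }
 *
 *  // ... then from e.g. a debugger command handler:
 *  int rc = DBGFR3BpEnum(pUVM, myBpEnumCb, NULL);
 * @endcode
 */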
2763
2764
2765/**
2766 * Called whenever a breakpoint event needs to be serviced in ring-3 to decide what to do.
2767 *
2768 * @returns VBox status code.
2769 * @param pVM The cross context VM structure.
2770 * @param pVCpu The vCPU the breakpoint event happened on.
2771 *
2772 * @thread EMT
2773 */
2774VMMR3_INT_DECL(int) DBGFR3BpHit(PVM pVM, PVMCPU pVCpu)
2775{
2776 /* Send it straight into the debugger or invoke the owner callback first? */
2777 if (pVCpu->dbgf.s.fBpInvokeOwnerCallback)
2778 {
2779 DBGFBP hBp = pVCpu->dbgf.s.hBpActive;
2780 pVCpu->dbgf.s.fBpInvokeOwnerCallback = false;
2781
2782 PDBGFBPINT pBp = dbgfR3BpGetByHnd(pVM->pUVM, hBp);
2783 AssertReturn(pBp, VERR_DBGF_BP_IPE_9);
2784
2785 /* Resolve owner (can be NIL_DBGFBPOWNER) and invoke callback if there is one. */
2786 if (pBp->Pub.hOwner != NIL_DBGFBPOWNER)
2787 {
2788 PCDBGFBPOWNERINT pBpOwner = dbgfR3BpOwnerGetByHnd(pVM->pUVM, pBp->Pub.hOwner);
2789 if (pBpOwner)
2790 {
2791 VBOXSTRICTRC rcStrict = dbgfR3BpHit(pVM, pVCpu, hBp, pBp, pBpOwner);
2792 if (VBOXSTRICTRC_VAL(rcStrict) == VINF_SUCCESS)
2793 {
2794 pVCpu->dbgf.s.hBpActive = NIL_DBGFBP;
2795 return VINF_SUCCESS;
2796 }
2797 if (VBOXSTRICTRC_VAL(rcStrict) != VINF_DBGF_BP_HALT) /* Guru meditation. */
2798 return VERR_DBGF_BP_OWNER_CALLBACK_WRONG_STATUS;
2799 /* else: Halt in the debugger. */
2800 }
2801 }
2802 }
2803
2804 return DBGFR3EventBreakpoint(pVM, DBGFEVENT_BREAKPOINT);
2805}
2806