
libusb: protect devs list with mutex in process_hotplug_event#783

Draft
Copilot wants to merge 8 commits into connection-callback from copilot/optimize-hidapi-thread-wait

Conversation

Contributor

Copilot AI commented Mar 30, 2026

process_hotplug_event() modifies devs without holding hid_hotplug_context.mutex, while hid_hotplug_register_callback() iterates devs under that mutex during HID_API_HOTPLUG_ENUMERATE. A concurrent device disconnect can free a node mid-traversal → use-after-free.

The other three backends (linux, windows, mac) already hold their main mutex for the entire event processing block. The libusb backend was the outlier because callback_thread treated devs as exclusively owned, but the ENUMERATE path in register_callback violates that assumption.

  • Wrap process_hotplug_event() with pthread_mutex_lock/unlock(&hid_hotplug_context.mutex). The mutex is recursive, so the nested acquire inside hid_internal_invoke_callbacks() is safe.
  • Update the devs field comment to reflect the actual synchronization contract.
static void process_hotplug_event(struct hid_hotplug_queue* msg)
{
	pthread_mutex_lock(&hid_hotplug_context.mutex);

	if (msg->event == LIBUSB_HOTPLUG_EVENT_DEVICE_ARRIVED) {
		// ... enumerate, invoke callbacks, append to devs ...
	}
	else if (msg->event == LIBUSB_HOTPLUG_EVENT_DEVICE_LEFT) {
		// ... detach from devs, invoke callbacks, free ...
	}

	pthread_mutex_unlock(&hid_hotplug_context.mutex);

	libusb_unref_device(msg->device);
}


Copilot AI and others added 2 commits March 30, 2026 09:45
Fix callback_thread busy-loop by using proper condition variable pattern

Replace broken hidapi_thread_cond_timedwait (which used a fixed absolute
timestamp in the past, causing a spin loop) with hidapi_thread_cond_wait
using the standard condition variable predicate pattern.

- callback_thread: Use cond_wait with proper predicate loop for zero CPU
  overhead when idle and immediate response on events/shutdown
- hid_libusb_hotplug_callback: Protect queue and signal under
  callback_thread.mutex to prevent lost wakeups
- hotplug_thread: Lock mutex before signaling for clean shutdown

Agent-Logs-Url: https://github.com/libusb/hidapi/sessions/a47a730b-fbb6-4042-9afb-00e5aa94817d

Co-authored-by: Youw <5939659+Youw@users.noreply.github.com>
Copilot AI changed the title [WIP] Optimize hidapi thread wait to reduce CPU usage Fix callback_thread busy-loop by using proper condition variable pattern Mar 30, 2026
Copilot finished work on behalf of Youw March 30, 2026 09:49
Copilot AI requested a review from Youw March 30, 2026 09:49
@Youw
Member

Youw commented Mar 30, 2026

@copilot We used to handle all events in a queue while holding the lock on a mutex, and now we lock it only when manipulating the queue, but not when handling the event. Is there a benefit in that approach?

Contributor Author

Copilot AI commented Mar 30, 2026

@copilot We used to handle all evens in a queue while holding the lock on a mutex, and now we lock it only when manipulating the queue, but not when handling the event. Is there a be...

Yes — releasing the lock during event processing avoids two problems:

  1. Blocking the producer: hid_libusb_hotplug_callback (which runs on hotplug_thread) must acquire the same mutex to enqueue events. If we held the lock during the (potentially long) callback invocation, the producer would stall for that entire duration. Releasing it lets new events be enqueued concurrently while an older one is being dispatched.

  2. Deadlock prevention: If user callback code calls back into HIDAPI (e.g., hid_hotplug_deregister_callback), that call will attempt to acquire callback_thread.mutex. Holding it during processing would deadlock.

The tradeoff is that hid_hotplug_context.devs (the known-device list maintained inside process_hotplug_event) is modified without the lock, but that is safe because only callback_thread reads or writes devs — events are still processed serially within the single consumer thread.

@Youw
Member

Youw commented Mar 30, 2026

@copilot I also see that devs is accessed from hid_hotplug_register_callback, even though it is done before the thread is created, maybe move that initialisation inside of the thread (to match cleanup of that variable at the end of the thread).
It would also be great to explicitly mark that variable as one that should only be accessed from that thread.

Contributor Author

Copilot AI commented Mar 30, 2026

@copilot I also see that devs is accessed from hid_hotplug_register_callback, even though it is done before the thread is created, maybe move that initialisation inside of the th...

Done in 3f4721e. devs is now initialized at the start of callback_thread (paired with its cleanup at the end), and its struct comment marks it as "Only accessed from callback_thread." The HID_API_HOTPLUG_ENUMERATE case in hid_hotplug_register_callback now enumerates devices locally (freed after use) so it never touches the thread-owned devs.

Copilot AI changed the title Fix callback_thread busy-loop by using proper condition variable pattern Fix callback_thread busy-loop and queue synchronization Mar 30, 2026
Copilot finished work on behalf of Youw March 30, 2026 10:53
Copilot stopped work on behalf of Youw due to an error March 30, 2026 10:54
libusb: protect devs list with mutex in process_hotplug_event

In process_hotplug_event(), the devs list was modified without holding
hid_hotplug_context.mutex. Meanwhile, hid_hotplug_register_callback()
iterates devs under that mutex during HID_API_HOTPLUG_ENUMERATE. A device
disconnect processed by callback_thread could free a devs entry while
the ENUMERATE loop was still traversing it, causing a use-after-free.

Fix by wrapping process_hotplug_event() with hid_hotplug_context.mutex.
The mutex is recursive, so hid_internal_invoke_callbacks() (which also
acquires it) works correctly. This aligns with how the linux backend
handles the same scenario.

Agent-Logs-Url: https://github.com/libusb/hidapi/sessions/cb1dc652-1f96-457d-8319-55e4a6f6e6f3

Co-authored-by: Youw <5939659+Youw@users.noreply.github.com>
Copilot AI changed the title Fix callback_thread busy-loop and queue synchronization libusb: protect devs list with mutex in process_hotplug_event Mar 30, 2026
Copilot finished work on behalf of Youw March 30, 2026 11:25