
some more clean up #100

Merged
batuhan merged 23 commits into main from batuhan/refactor
Mar 30, 2026

Conversation

@batuhan
Member

@batuhan batuhan commented Mar 30, 2026

No description provided.

batuhan added 20 commits March 29, 2026 21:28
Replace external SDK prompt models with local PromptContext/PromptMessage/PromptBlock types and remove reliance on bridgesdk. Add prompt_builder, prompt_context_local and prompt_projection_local to centralize prompt assembly, history replay, and current-turn text construction (including deterministic handling of untrusted prefixes and link context). Consolidate history loading into replayHistoryMessages and fetchHistoryRowsWithExtra, update media handling (PDFs normalized into file-context text; audio/video must be preprocessed), and trim/normalize system prompt text passed to Responses. Remove ensureCanonicalUserMessage and canonical_user_messages.go, update many call sites and tests, and adjust default command prefix resolution to use ResolveCommandPrefix. This simplifies prompt lifecycle, reduces cross-package coupling, and standardizes multimodal input conversion.
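A minimal sketch of the local prompt model these commits introduce (PromptContext/PromptMessage/PromptBlock, with Text() and the visible-only projection mentioned in a later commit); the block types beyond text/thinking and all field names are assumptions, not the actual implementation:

```go
package main

// Illustrative shape of the local prompt model; field names are assumptions.

type PromptRole string

const (
	PromptRoleSystem    PromptRole = "system"
	PromptRoleUser      PromptRole = "user"
	PromptRoleAssistant PromptRole = "assistant"
)

type PromptBlockType string

const (
	PromptBlockText     PromptBlockType = "text"
	PromptBlockThinking PromptBlockType = "thinking"
)

type PromptBlock struct {
	Type PromptBlockType
	Text string
}

type PromptMessage struct {
	Role   PromptRole
	Blocks []PromptBlock
}

type PromptContext struct {
	Messages []PromptMessage
}

// Text concatenates all text-bearing blocks, including thinking blocks.
func (m PromptMessage) Text() string { return m.text(true) }

// VisibleText skips thinking blocks, as used when building outbound inputs.
func (m PromptMessage) VisibleText() string { return m.text(false) }

func (m PromptMessage) text(includeThinking bool) string {
	out := ""
	for _, b := range m.Blocks {
		if b.Type == PromptBlockThinking && !includeThinking {
			continue
		}
		out += b.Text
	}
	return out
}
```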
Centralize agent defaults and refactor prompt construction. Move provider-specific settings into Agents.Defaults (model selections, PDF engine, compaction) and initialize them in runtime defaults; remove the old ProvidersConfig/ProviderConfig fields. Rework tools config shape (introduce tools.web and tools.links) and migrate link-preview/config paths accordingly. Introduce ModelSelectionConfig and use it for default model selection. Consolidate multiple prompt-building helpers into unified turn-based builders (buildCurrentTurnWithLinks, buildMediaTurnContext, buildHeartbeatTurnContext, buildPromptContextForTurn) and update callers across handlers and internal dispatch. Provide default PDF engine fallback ("mistral-ocr") via helper methods. Apply related test updates and remove the example integrations config. These changes simplify configuration, centralize defaults, and reduce duplication in prompt assembly.
Replace direct openai.ChatCompletionMessageParamUnion usage with a new PromptContext/PromptMessage model across the AI bridge. Add prompt_context_ops helpers, conversions between PromptContext and chat messages, and estimatePromptContextTokensForModel. Update compaction, retry, streaming (chat/responses), subagent, pending queue, status text, and memory integration code to operate on PromptContext, including new memory prompt augmentor wiring and steering prompt builders. This centralizes prompt handling, removes many direct OpenAI type dependencies, and adapts runtime compaction/truncation to the new representation.
Replace usages of openai ChatCompletion message types with the internal PromptMessage API (PromptRoleUser/Text()). Update tests to assert Role/Text() and rename buildSteeringUserMessages -> buildSteeringPromptMessages. Remove the promptIntegrationRegistry, corePromptIntegration, and related prompt augmentation/additional message helpers and promptRegistry field; core integrations now only register tool integrations. Switch agent-loop selection and callers to use the *Prompt variants (runChatCompletionsAgentLoopPrompt / runResponsesAgentLoopPrompt) and remove conversions that produced openai.ChatCompletionMessageParamUnion. Adjust imports accordingly.
Replace the unused "slices" import with "strings" in response_retry.go and remove unused OpenAI imports from streaming_chat_completions.go and streaming_executor.go. These changes clean up imports to fix unused-import warnings and do not change runtime behavior.
Use bridgesdk.ResolveCommandPrefix to resolve the configured command prefix with a hardcoded fallback in Codex and DummyBridge connectors (fallbacks: "!ai" and "!dummybridge"). Add tests (TestGetNameUsesDefaultCommandPrefixBeforeStartup) for both bridges to assert the DefaultCommandPrefix value before startup. Files changed: bridges/codex/constructors.go, bridges/dummybridge/connector.go, and tests in bridges/codex/connector_test.go and bridges/dummybridge/connector_test.go.
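The fallback behavior this commit describes can be sketched roughly as follows; the signature is an assumption, and the real helper lives in bridgesdk:

```go
package main

import "strings"

// resolveCommandPrefix prefers the configured prefix and falls back to the
// bridge's hardcoded default (e.g. "!ai" for Codex, "!dummybridge" for
// DummyBridge). Illustrative sketch, not the bridgesdk implementation.
func resolveCommandPrefix(configured, fallback string) string {
	if p := strings.TrimSpace(configured); p != "" {
		return p
	}
	return fallback
}
```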
Introduce LoginCredentials to hold per-login api_key/base_url/service_tokens and migrate UserLoginMetadata to use Credentials. Update accessors, merging/clone helpers, and service token handling across MCP, image generation, media, provisioning, and login code. Rename several prompt conversion and helper functions to unexported variants (e.g. PromptContextToResponsesInput -> promptContextToResponsesInput, ChatMessagesToPromptContext -> chatMessagesToPromptContext) and update all call sites, including provider, streaming, compaction, and logging changes to use PromptContext. Add a test to ensure legacy config sections are removed from example config and remove those sections from the bridge example YAML. Misc: adapt error logging to accept PromptContext, adjust continuation/steering logic to append steering messages into the canonical prompt, and update related tests.
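The nested credential shape can be sketched as below; the ServiceTokens fields and the UserLoginMetadata wrapper layout are assumptions for illustration:

```go
package main

// Illustrative nesting of per-login credentials (APIKey, BaseURL,
// ServiceTokens) inside login metadata; field names are assumptions.

type ServiceTokens struct {
	DesktopAPI string
}

type LoginCredentials struct {
	APIKey        string
	BaseURL       string
	ServiceTokens ServiceTokens
}

type UserLoginMetadata struct {
	Credentials LoginCredentials
}

// cloneServiceTokens returns an independent copy so callers can mutate the
// result without affecting the source (avoiding in-place merges).
func cloneServiceTokens(t ServiceTokens) ServiceTokens {
	return t // value copy; all fields are strings
}
```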
Consolidate and clean up duplicated logic across the AI bridge.

- Add emitHeartbeatFailure to centralize heartbeat failure event emission and replace repeated inlined blocks.
- Introduce callGeminiMediaCapability to unify Google/Gemini media capability calls for image/video and remove duplicated request construction; remove media request type aliases.
- Replace mergeLoginCredentials usage by cloning service tokens and setting credentials nil when empty to avoid in-place merges (uses cloneServiceTokens).
- Remove legacy config upgrade machinery and deprecated mergeServiceTokens/mergeLoginCredentials functions and related configupgrade imports.

These changes reduce duplication, simplify imports, and make media and heartbeat handling more maintainable.
Remove deprecated OpenAI chat provider and related audio helper, wrap background sync call, fix OpenCode streaming logic, and replace room-meta sending with Bot.SendState.

- bridges/ai/provider_openai_chat.go: delete OpenAI chat provider implementation.
- bridges/ai/client.go: remove getAudioFormat helper.
- bridges/openclaw/manager.go: wrap syncSessions invocation in a goroutine that explicitly discards the returned error.
- bridges/opencode/opencode_canonical_stream.go: stop reassigning delivered when appending assistant text (prevents overwriting delivered state).
- helpers.go: replace portal.Internal().SendRoomMeta usage with portal.Bridge.Bot.SendState, return bool success and log failures via zerolog (avoids linting on deprecated internals and uses explicit state API).
- sdk/conversation_state_test.go: adjust test to ignore the boolean return from saveConversationStateToGenericMetadata with a clarifying comment.

These changes consolidate provider removal, address a lint/deprecation concern, improve error visibility for async syncs, and correct streaming/text delivery behavior.
Migrate integration and connector code to stronger typed APIs and generics. Key changes:
- Use integrationruntime.ModuleHooks and concrete integrationruntime types instead of generic any for module registration, scopes and callbacks.
- Update bridgesdk.Config usages to the generic form (e.g. bridgesdk.Config[*AIClient, *Config]) and adjust NewStandardConnectorConfig parameter factories to return concrete pointer types.
- Replace many host/handler signatures to accept concrete types (e.g. *bridgev2.Portal, *PortalMetadata, *dbutil.Database) and remove redundant runtime casts.
- Add PortalMetadata helper accessors (AgentID, CompactionCounter, InternalRoom, ModuleMetaValue, SetModuleMetaValue).
- Improve compaction retry logic by caching token estimates from preflight flush, return an int from the preflight hook, and emit lifecycle events without embedding client references.
- Simplify header handling in media understanding helpers to avoid unnecessary copying.
- Propagate truncated flag into file message XML and simplify XML building.
- Rename and tighten dummybridge session helpers (sessionFromAny -> requireSession) and update related APIs to use typed sessions.
- Adjust tests to use the new generic bridgesdk.NewConversation signatures and other updated APIs.

These changes tighten type safety, reduce runtime type assertions, and prepare the codebase for clearer integration APIs.
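The move to generic config types (e.g. bridgesdk.Config[*AIClient, *Config]) roughly follows this shape; all names and fields here are illustrative, not the real SDK API:

```go
package main

// Config parameterizes the connector config over concrete client and
// config-data types, replacing untyped any and runtime casts.
type Config[Client any, Data any] struct {
	NewClient func() Client
	NewData   func() Data
}

// Hypothetical concrete types standing in for the bridge's own.
type AIClient struct{ Name string }
type AIConfigData struct{ Model string }

// NewStandardConnectorConfig wires factories that return concrete pointer
// types, so callers get typed values back without assertions.
func NewStandardConnectorConfig[C any, D any](newClient func() C, newData func() D) Config[C, D] {
	return Config[C, D]{NewClient: newClient, NewData: newData}
}
```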
Replace manual slice-copy patterns (append([]T{}, ...)) with slices.Clone for clarity and to avoid shared backing arrays. Clone opts.prepend, opts.append, and opts.leadingBlocks where used. Simplify trimming of SystemPrompt by assigning TrimSpace result to a variable before checking/adding the system message. Add the "slices" import.
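The slice-copy change is a one-line mechanical substitution; a hypothetical before/after:

```go
package main

import "slices"

// Before: manual copy via append to an empty slice.
func manualClone(src []int) []int { return append([]int{}, src...) }

// After: slices.Clone says the intent directly and always yields an
// independent slice with its own backing array.
func clearerClone(src []int) []int { return slices.Clone(src) }
```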
Add snapshot utilities to avoid races when queuing events: introduce pending_event.go with snapshotPendingEvent and deep-clone helpers for event.Content.Raw, and update handlers (debounce, matrix, regenerate, media/text handlers) to use snapshots and cloned raw maps before dispatching/queuing. Add pending_event_test to verify reply-target preservation after source mutation. Refactor builtin tool selection: introduce builtinToolPreset, model preset list, selectToolDefinitionsByName, and make selectedBuiltinToolsForTurn choose tools based on a preset (model vs agent), plus tests and helpers (testBuiltinToolClient, toolDefinitionNames) to cover various web-tool configurations. Also add/update streaming tests to use the new helpers and validate tool lists.
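The snapshot/deep-clone idea can be sketched as follows; the function body is an assumption modeled on the described behavior (clone maps and []any recursively, share scalars):

```go
package main

// cloneRawValue deep-clones JSON-shaped content (map[string]any / []any)
// so a queued snapshot is unaffected by later mutation of the source event.
// Illustrative sketch; names mirror the commit but the body is assumed.
func cloneRawValue(v any) any {
	switch tv := v.(type) {
	case map[string]any:
		out := make(map[string]any, len(tv))
		for k, val := range tv {
			out[k] = cloneRawValue(val)
		}
		return out
	case []any:
		out := make([]any, len(tv))
		for i, val := range tv {
			out[i] = cloneRawValue(val)
		}
		return out
	default:
		return v // scalars (strings, numbers, bools) are immutable and shared
	}
}
```

This mirrors the reply-target test mentioned above: mutate the source after snapshotting and assert the snapshot is unchanged.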
Remove inline follow-up continuation logic from the agent loop and Responses adapter so turns finalize immediately instead of reopening for queued edits. Add filtering to drop PromptBlockThinking from history and introduce PromptMessage.VisibleText() (and internal text(includeThinking)) so visible-only text is used when constructing outbound inputs. Replay/history APIs now support excluding a specific message ID and prepareInboundPromptContext appends replayed history and builds the system prompt accordingly. Tests added/updated for config example sync, prompt history/visibility, and message-to-responses behavior. Also update config.example.yaml with the revised network/tools/agents schema and related defaults.
Do not start a room run when there is pending queue work: add roomHasPendingQueueWork, factor it into queue decision logic, clear roomBusy on interrupt, and only acquire the room when not busy. Add a test to verify new queuing-behind-backlog behavior. For image generation, treat Magic Proxy as not exposing Gemini/OpenRouter image endpoints (return false) and update tests to expect routing to OpenAI and gemini unavailability accordingly.
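The queue-gating described here can be sketched as below; the locking shape and all names besides roomHasPendingQueueWork are assumptions:

```go
package main

import "sync"

// roomQueue tracks per-room backlog and busy state; a room is only
// acquired for a new run when it is idle and has no queued work.
type roomQueue struct {
	mu      sync.Mutex
	pending map[string]int // room ID -> queued event count
	busy    map[string]bool
}

func (q *roomQueue) roomHasPendingQueueWork(roomID string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	return q.pending[roomID] > 0
}

// tryAcquireRoom marks the room busy only when it is idle with no backlog,
// so new events queue behind existing work instead of starting a run.
func (q *roomQueue) tryAcquireRoom(roomID string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.busy[roomID] || q.pending[roomID] > 0 {
		return false
	}
	q.busy[roomID] = true
	return true
}
```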
Regenerated bridges/ai/beeper_models_generated.go with an updated timestamp. The manifest was refreshed: many models were removed, added or renamed, and several model capabilities (API, Supports*, ContextWindow, MaxOutputTokens, AvailableTools) were adjusted. Aliases (e.g. beeper/fast, beeper/reasoning, beeper/smart) were updated to new targets. Updated the allowlist expectations in bridges/ai/beeper_models_manifest_test.go to match the new manifest.
Register the new model "google/gemma-2-27b-it" (Gemma 2 27B). Added model metadata to pkg/ai/beeper_models.json, added the mapping in cmd/generate-models/main.go, and regenerated bridges/ai/beeper_models_generated.go. Model metadata: provider=openrouter, api=openai-completions, context_window=8192, max_output_tokens=2048, and no vision/tool-calling/reasoning/web-search support.
@coderabbitai

coderabbitai bot commented Mar 30, 2026

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Large model catalog refresh with updated aliases.
    • PDF text extraction for uploaded PDFs.
  • Improvements

    • New unified prompt context/pipeline for multimodal, tool and continuation handling.
    • Smarter image-generation provider routing and tool-selection presets.
    • Safer credential handling that treats whitespace-only keys as absent and normalizes login credentials.
  • Bug Fixes

    • Edits rejected for model-target rooms.
    • Pending-queue handling refined to avoid race/drain issues.
  • Tests

    • Expanded and updated unit tests across prompts, configs, queue and streaming flows.

Walkthrough

Converts prompt handling to a local PromptContext model, replaces OpenAI/SDK message unions with PromptContext across agent loops and streaming, introduces credential nesting, tightens integration/runtime types, refactors prompt building/history replay/compaction, updates model manifest and config layout, and propagates these changes across many runtime and test files.

Changes

Cohort / File(s) Summary
Prompt model & conversions
bridges/ai/messages.go, bridges/ai/prompt_context_local.go, bridges/ai/prompt_context_ops.go, bridges/ai/prompt_projection_local.go, bridges/ai/prompt_history_test.go
Introduce local PromptContext/PromptMessage/PromptBlock types and conversion helpers (to/from Responses/Chat formats), cloning, data-URL building, and visibility/text helpers; add tests for block filtering and visibility.
Agent loop / streaming / responses
bridges/ai/streaming_executor.go, bridges/ai/streaming_init.go, bridges/ai/streaming_responses_api.go, bridges/ai/streaming_chat_completions.go, bridges/ai/streaming_continuation.go, bridges/ai/streaming_input_conversion.go, bridges/ai/streaming_state.go
Replace OpenAI chat-message unions with PromptContext in streaming and agent-loop paths; remove follow-up/Continue plumbing and baseInput state; continuation and Responses params built from PromptContext; multimodal detection updated to block model.
Prompt building, history & pending events
bridges/ai/prompt_builder.go, bridges/ai/handlematrix.go, bridges/ai/text_files.go, bridges/ai/pending_event.go
Add centralized prompt construction (history replay, current-turn assembly, link context), media/attachment normalization, PDF extraction via pdftotext, and pending-event snapshot helpers used during dispatch.
Retry, compaction & token estimation
bridges/ai/response_retry.go, bridges/ai/compaction_summarization.go, bridges/ai/runtime_compaction_adapter.go
Refactor retry/compaction flows to operate on PromptContext; convert between PromptContext and chat messages for compaction/token estimation; tool-result truncation preserves block structure.
Credentials & login metadata
bridges/ai/metadata.go, bridges/ai/login.go, bridges/ai/login_loaders.go, bridges/ai/mcp_helpers.go, bridges/ai/mcp_servers.go
Introduce LoginCredentials (APIKey, BaseURL, ServiceTokens) and accessor helpers; migrate call sites and tests to nested Credentials instead of top-level login fields and update service-token handling.
Integration host & runtime typing
bridges/ai/integration_host.go, pkg/integrations/runtime/interfaces.go, pkg/integrations/runtime/module_hooks.go
Tighten integration/module APIs to concrete typed signatures (*bridgev2.Portal, Meta interface), remove many untyped host methods, update scopes/events to use typed portal/meta.
Integrations & memory prompt changes
bridges/ai/integrations.go, pkg/integrations/memory/*, bridges/ai/integrations_config.go
Remove PromptIntegration pipeline; add PromptContextText-style interface and refactor memory integration to inject text-based prompt context; rework config schema (providers → models, pruning → agents.defaults.compaction).
Provider, tools & media handling
bridges/ai/provider_openai_responses.go, bridges/ai/image_generation_tool.go, bridges/ai/image_understanding.go, bridges/ai/media_understanding_runner.go
Replace SDK prompt conversions with local helpers, add defensive login/credential checks, adjust provider inference and MagicProxy handling, and remove some audio-specific paths.
Model manifest & config examples
bridges/ai/beeper_models_generated.go, pkg/ai/beeper_models.json, bridges/ai/beeper_models_manifest_test.go, bridges/ai/integrations_example-config.yaml, config.example.yaml, cmd/generate-models/main.go
Large model catalog and alias updates, reorganize example configs to models.providers and agents.defaults.compaction, and make OpenRouter token optional in generator.
Queue, room-run & heartbeat
bridges/ai/pending_queue.go, bridges/ai/room_runs.go, bridges/ai/client.go, bridges/ai/heartbeat_execute.go, bridges/ai/queue_status_test.go
Add per-queue locking and roomHasPendingQueueWork, treat pending queue as room-busy, remove wrapper enqueue method, centralize heartbeat failure emission, and update acquire/dispatch semantics and tests.
Canonicalization & SDK helper removal
bridges/ai/canonical_prompt_messages.go, bridges/ai/canonical_user_messages.go, bridges/ai/messages_responses_input_test.go
Drop dependency on agentremote/sdk prompt helpers; replace with local prompt-message projection and remove canonical-user-message helper calls.
SDK generics & connector typing
sdk/types.go, sdk/client.go, sdk/runtime.go, sdk/connector.go, sdk/connector_helpers.go
Convert SDK Config/client/runtime to generic parameterized forms (session/config-data), update hook signatures and factories, add ResolveCommandPrefix helper, and wire typed meta factories.
Streaming tool selection & presets
bridges/ai/streaming_tool_selection.go, bridges/ai/streaming_tool_selection_test.go, bridges/ai/streaming_request_tools_test.go
Add builtin tool preset classification (none/model/agent) and select fixed model-chat tool lists; update tests and helpers for deterministic comparisons.
Removed/renamed legacy prompt files & tests
multiple files (see diff)
Delete old SDK prompt_projection/prompt_context and turn-validation modules; rename many run/loop functions to accept PromptContext; update many tests for generics, credential shape, model IDs, and new function signatures; add new tests for new helpers.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/Event
    participant AI as AIClient
    participant Provider as ModelProvider (Responses/Chat)
    participant DB as DB/Queue
    User->>AI: incoming Matrix event
    AI->>AI: snapshotPendingEvent -> buildPromptContext (replayHistoryMessages + current turn)
    AI->>Provider: buildResponsesAgentLoopParams(prompt, SystemPrompt) -> Generate/Stream
    Provider-->>AI: streamed tokens/results
    AI->>DB: persist messages / update queue state
    AI-->>User: finalize/send assistant message
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
bridges/ai/desktop_api_sessions.go (1)

161-179: ⚠️ Potential issue | 🟠 Major

Merge the fallback token into an existing default instance.

If desktopDefaultInstance is already present only to override BaseURL, this code keeps that entry and skips the global DesktopAPI token because the key exists. desktopAPIClient() then fails with “desktop API token is not set”. The fallback should also fill an existing default instance whose Token is blank.

💡 Proposed fix
-	if token := strings.TrimSpace(creds.ServiceTokens.DesktopAPI); token != "" {
-		if _, ok := instances[desktopDefaultInstance]; !ok {
-			instances[desktopDefaultInstance] = DesktopAPIInstance{Token: token}
-		}
-	}
+	if token := strings.TrimSpace(creds.ServiceTokens.DesktopAPI); token != "" {
+		instance := instances[desktopDefaultInstance]
+		if strings.TrimSpace(instance.Token) == "" {
+			instance.Token = token
+			instances[desktopDefaultInstance] = instance
+		}
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/desktop_api_sessions.go` around lines 161 - 179, The code collects
DesktopAPI instances from loginCredentials (loginMetadata(oc.UserLogin)) into
instances but when applying the fallback creds.ServiceTokens.DesktopAPI it only
skips if desktopDefaultInstance key exists; update this logic so that when a
non-empty fallback token exists you either create the default entry or merge the
token into an existing DesktopAPIInstance whose Token is empty (leave BaseURL
intact). Locate the block that checks token :=
strings.TrimSpace(creds.ServiceTokens.DesktopAPI) and change it to: if token !=
"" then if instances[desktopDefaultInstance] exists and
instances[desktopDefaultInstance].Token is empty set its Token to token,
otherwise add instances[desktopDefaultInstance] = DesktopAPIInstance{Token:
token}.
bridges/ai/constructors.go (1)

20-79: ⚠️ Potential issue | 🟠 Major

Keep the AI config upgrader in this constructor.

This PR reshapes the AI config surface, but NewAIConnector() no longer wires a ConfigUpgrader. Existing installs will stop auto-migrating older keys and can silently come up with defaults instead of their previous provider/pruning settings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/constructors.go` around lines 20 - 79, The constructor removed the
ConfigUpgrader wiring so older installs won't auto-migrate AI config keys;
update the bridgesdk.NewStandardConnectorConfig call (oc.sdkConfig) to include a
ConfigUpgrader parameter that runs the AI config upgrader (e.g., set
ConfigUpgrader: oc.configUpgrader or a function that calls the existing AI
config upgrade logic), ensuring the upgrader function references the connector
config (oc.Config) and is invoked during connector init to migrate legacy
provider/pruning keys.
pkg/integrations/memory/integration.go (1)

72-79: ⚠️ Potential issue | 🟠 Major

Resolve the effective agent ID instead of using raw Meta.AgentID().

These sites now bypass agentIDFromEventMeta, so rooms that rely on DefaultAgentID() fall back to "" here. That makes memory tool availability, prompt bootstrap reads, and memory command writes hit the wrong namespace even though the execution path resolves the default agent elsewhere.

🧭 Proposed fix
 	if scope.Meta != nil {
-		agentID := scope.Meta.AgentID()
+		agentID := i.agentIDFromEventMeta(scope.Meta)
 		_, errMsg := i.getManager(agentID)
 		if errMsg != "" {
 			return true, false, iruntime.SourceProviderLimit, errMsg
@@
 	agentID := ""
 	if meta != nil {
-		agentID = meta.AgentID()
+		agentID = i.agentIDFromEventMeta(meta)
 	}
 	content, filePath, found, err := i.host.ReadTextFile(ctx, agentID, path)
@@
 	agentID := ""
 	if scope.Meta != nil {
-		agentID = scope.Meta.AgentID()
+		agentID = i.agentIDFromEventMeta(scope.Meta)
 	}
 	return i.host.WriteTextFile(ctx, scope.Portal, scope.Meta, agentID, mode, path, content, maxBytes)

Also applies to: 298-303, 441-445

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/integrations/memory/integration.go` around lines 72 - 79, Replace direct
use of scope.Meta.AgentID() with the resolver agentIDFromEventMeta(scope.Meta)
so the default-agent fallback is applied; e.g., in the block that calls
getManager(agentID) (and the similar blocks at the other two occurrences around
lines 298-303 and 441-445) compute the effective agent ID via
agentIDFromEventMeta and use that value when calling getManager, returning the
same tuple on error. Ensure you update all three places (the shown block and the
two other occurrences) to call agentIDFromEventMeta instead of reading
Meta.AgentID() directly.
🧹 Nitpick comments (14)
sdk/conversation_state_test.go (1)

82-84: Assert the expected false result instead of discarding it.

The comment documents expected behavior, but the test doesn’t verify it. Please assert that this call returns false so contract changes are caught.

Proposed test tightening
-	// saveConversationStateToGenericMetadata intentionally returns false here
-	// because generic metadata doesn't support the carrier path.
-	_ = saveConversationStateToGenericMetadata(&holder, state)
+	// Generic metadata does not support carrier-backed holders.
+	if ok := saveConversationStateToGenericMetadata(&holder, state); ok {
+		t.Fatalf("expected generic metadata save to report unsupported carrier path")
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/conversation_state_test.go` around lines 82 - 84, The test currently
discards the boolean result of saveConversationStateToGenericMetadata(&holder,
state) even though the comment expects false; change the test to capture and
assert that the call returns false (e.g., result :=
saveConversationStateToGenericMetadata(&holder, state); require.False(t, result)
or assert.False(...)) to lock the contract. Keep the call parameters (holder,
state) and update the assertion style to match the test suite's existing
helpers.
bridges/openclaw/manager.go (1)

596-598: Consider logging the error for consistency and debugging.

This background syncSessions call silently discards the error, while a similar pattern at lines 2147-2150 logs failures at debug level. For consistency and to aid troubleshooting, consider logging the error here as well.

🔍 Suggested improvement for consistency
 	go func() {
-		_ = m.syncSessions(m.client.BackgroundContext(ctx))
+		if err := m.syncSessions(m.client.BackgroundContext(ctx)); err != nil {
+			m.client.Log().Debug().Err(err).Str("session_key", meta.OpenClawSessionKey).Msg("Failed to refresh OpenClaw sessions after synthetic DM message")
+		}
 	}()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/openclaw/manager.go` around lines 596 - 598, The background goroutine
currently discards the error returned by
m.syncSessions(m.client.BackgroundContext(ctx)); change it to capture the error
and log it for consistency with the other call that logs failures (use the same
debug-level logging used elsewhere). Specifically, wrap the call so the returned
error is assigned (err := m.syncSessions(...)) and then call the manager's
logger (e.g., m.logger.Debugf or m.logger.Debug) to emit a descriptive message
including the error when err != nil; keep the call to
m.client.BackgroundContext(ctx) and the goroutine structure unchanged.
pkg/integrations/runtime/helpers.go (1)

9-15: Stale docstring after simplification.

The comment on line 10 mentions "Returns zerolog.Nop() if the underlying logger is not a zerolog.Logger", but this fallback case was removed. The function now only returns Nop() when host == nil.

📝 Suggested docstring update
 // ZerologFromHost extracts a zerolog.Logger from a Host.
-// Returns zerolog.Nop() if the underlying logger is not a zerolog.Logger.
+// Returns zerolog.Nop() if the host is nil.
 func ZerologFromHost(host Host) zerolog.Logger {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/integrations/runtime/helpers.go` around lines 9 - 15, Update the
docstring for ZerologFromHost to reflect its current behavior: it returns
zerolog.Nop() only when the provided host is nil and otherwise returns
host.RawLogger(); remove the stale phrase about falling back when the underlying
logger is not a zerolog.Logger and ensure the comment accurately documents the
nil-only fallback for the ZerologFromHost function.
pkg/agents/presets.go (1)

10-10: Constant name no longer matches the model ID.

ModelZAIGLM47 references GLM 4.7 in its name, but the value is "z-ai/glm-5-turbo". Consider renaming to ModelZAIGLM5Turbo for clarity. This constant is used in 2 additional files (pkg/agents/boss.go:16 and pkg/agents/beeper.go:22), so the rename would require minimal updates.

📝 Suggested rename
-	ModelZAIGLM47     = "z-ai/glm-5-turbo"
+	ModelZAIGLM5Turbo = "z-ai/glm-5-turbo"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/agents/presets.go` at line 10, The constant name ModelZAIGLM47 does not
match its value "z-ai/glm-5-turbo"; rename the constant to ModelZAIGLM5Turbo
(leave the string value unchanged) and update every reference to ModelZAIGLM47
to the new identifier (e.g., the usages in the boss and beeper modules where
that constant is imported/used); ensure the project builds and run tests to
catch any missed references or documentation/comments that mention the old name.
bridges/ai/prompt_projection_local.go (1)

101-129: Add a clarifying comment to document that this function processes only the latest user message.

The function correctly processes only messages[0], and this is intentional: all callers pass a single-element slice via promptTail(promptContext, 1). However, without a comment documenting this design, future maintainers may misinterpret the single-message behavior as a bug. Adding a brief comment like "processes only the latest user message (callers guarantee single-element slice)" would clarify the intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_projection_local.go` around lines 101 - 129, Add a short
clarifying comment above the function turnDataFromUserPromptMessages stating
that it intentionally processes only messages[0] (the latest user message)
because callers pass a single-element slice (e.g., via promptTail(promptContext,
1)); mention that callers guarantee the single-element invariant so the behavior
is intentional and not a bug.
connector_builder_test.go (1)

17-34: Assert the bridge passed to Init as well.

This test now verifies pointer identity for Start and Stop, but Init still ignores its argument. A regression in the first hook would still pass here.

🎯 Tighten the hook-order test
-		Init: func(*bridgev2.Bridge) { order = append(order, "init") },
+		Init: func(got *bridgev2.Bridge) {
+			if got != wantBridge {
+				t.Fatalf("expected init hook bridge %p, got %p", wantBridge, got)
+			}
+			order = append(order, "init")
+		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@connector_builder_test.go` around lines 17 - 34, The Init hook in the test
doesn't assert the bridge argument; update the ConnectorSpec.Init callback to
verify the passed *bridgev2.Bridge equals wantBridge (like Start/Stop do) and
call t.Fatalf with a descriptive message if it doesn't match so the test asserts
pointer identity for Init as well; locate the NewConnector/ConnectorSpec usage
and change the anonymous Init func to accept the bridge parameter, compare it to
wantBridge, and append "init" only after the check.
bridges/dummybridge/bridge.go (1)

60-62: Drop the no-op validation in onDisconnect.

The requireSession result is ignored here, so this call currently does not enforce anything or perform cleanup.

♻️ Minimal cleanup
-func (dc *DummyBridgeConnector) onDisconnect(session *dummySession) {
-	_, _ = requireSession(session)
+func (dc *DummyBridgeConnector) onDisconnect(_ *dummySession) {
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/dummybridge/bridge.go` around lines 60 - 62, Remove the no-op
validation call in DummyBridgeConnector.onDisconnect: delete the unused
requireSession(session) invocation inside onDisconnect (function onDisconnect,
type DummyBridgeConnector, param *dummySession) so the method no longer calls
requireSession and performs no-op validation; if disconnection cleanup is
required, implement actual cleanup logic here instead of calling requireSession.
bridges/ai/pending_event.go (1)

44-52: Consider whether []byte or other mutable slice types need handling.

The deep-clone correctly handles map[string]any and []any, but the default case shares references to any other types. If evt.Content.Raw can contain []byte slices (e.g., from custom unmarshaling), those would remain shared. For typical JSON-unmarshaled content this is fine since strings are used, but worth confirming if the source event content could include mutable byte slices.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/pending_event.go` around lines 44 - 52, The clonePendingRawValue
function currently clones maps and []any but returns other types by reference;
add handling for mutable byte-slice types so they are deep-copied instead of
shared. Update clonePendingRawValue to detect []byte (and equivalent []uint8)
and return a new copy (e.g., allocate and copy bytes) and consider adding a
generic branch for other mutable slice element types if your events can include
them; keep existing branches for map[string]any -> clonePendingRawMap and []any
-> clonePendingRawSlice and ensure the default still returns immutable values
unchanged.
bridges/ai/error_logging.go (1)

42-59: Keep an audio-specific failure signal.

has_multimodal now groups audio, image, and file requests into one bucket. If audio still follows a distinct provider/code path, losing has_audio will make incident triage much harder.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/error_logging.go` around lines 42 - 59, The change removed an
audio-specific failure signal; update addRequestSummary to preserve a separate
has_audio field in addition to has_multimodal by calling a helper that detects
audio content (e.g., implement or call promptHasAudioContent(prompt)) and adding
event.Bool("has_audio", promptHasAudioContent(prompt)); keep the existing
event.Bool("has_multimodal", promptHasMultimodalContent(prompt)) unchanged so
audio remains separately observable for triage.
bridges/ai/messages_responses_input_test.go (1)

9-50: Keep file-block coverage in the local Responses converter tests.

This rewrite only exercises text and image input, so the newly local promptContextToResponsesInput() path no longer has regression coverage for PromptBlockFile / input_file handling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/messages_responses_input_test.go` around lines 9 - 50, The test
TestPromptContextToResponsesInput_MultimodalUser no longer covers
PromptBlockFile handling; update the test (or add a new one) to include a
PromptBlock{Type: PromptBlockFile, /* file fields */} when calling
promptContextToResponsesInput and assert the returned input contains the
corresponding file part (e.g., check for part.OfInputFile / input_file
representation, filename or URL/value fields) alongside the existing text and
image assertions so file-to-input_file conversion is exercised.
bridges/ai/media_understanding_runner_openai_test.go (1)

59-61: Drive this PDF-engine assertion through the production defaulting path.

The fixture now expects "mistral-ocr" from a manually constructed OpenAIConnector{Config: Config{}}. If that default is injected by NewAIConnector()/runtime defaulting instead of resolveOpenRouterMediaConfig() itself, this test is no longer validating the real behavior. Either call the same defaulting helper here or set the PDF engine explicitly in the fixture.

Also applies to: 100-101

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/media_understanding_runner_openai_test.go` around lines 59 - 61,
The test constructs an OpenAIConnector with an empty Config which bypasses the
production defaulting path; update the fixture so it either uses the same
defaulting helper or sets the PDF engine explicitly: replace the manual
OpenAIConnector{Config: Config{}} in the newMediaTestClient call with a
connector produced by NewAIConnector() (or call the runtime defaulting helper
used by resolveOpenRouterMediaConfig()), or directly set Config.PDFEngine =
"mistral-ocr" on the OpenAIConnector so the test reflects real behavior; ensure
this change is applied wherever the empty-config fixture is used (including the
other occurrences around lines 100-101).
bridges/ai/text_files.go (1)

199-234: External dependency on pdftotext should be documented.

The function relies on the pdftotext binary being available in PATH. Consider:

  1. Documenting this dependency in deployment/setup docs
  2. Adding a more descriptive error message when pdftotext is not found (e.g., checking for exec.ErrNotFound)
💡 Optional: Improve error message for missing pdftotext
 	cmd := exec.CommandContext(ctx, "pdftotext", "-layout", "-enc", "UTF-8", inputPath, "-")
 	output, err := cmd.Output()
 	if err != nil {
+		if errors.Is(err, exec.ErrNotFound) {
+			return "", false, fmt.Errorf("pdftotext not installed: %w", err)
+		}
 		if exitErr, ok := err.(*exec.ExitError); ok {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/text_files.go` around lines 199 - 234, Document that the pdftotext
binary (poppler) must be installed and on PATH (e.g., README/deployment or setup
docs), and update downloadPDFFile to detect the binary-missing case: when
the error from exec.CommandContext(...).Output() wraps exec.ErrNotFound (check
with errors.Is; a missing binary surfaces as an *exec.Error, not an
*exec.ExitError), return a clear, actionable error
message that mentions "pdftotext not found" and suggests installing
poppler-utils or adding it to PATH; keep the existing fallback behavior for
other pdftotext errors (preserve the stderr-based message and wrapping with
fmt.Errorf).
bridges/ai/integrations.go (1)

536-546: Unused parameter could be removed or documented.

The login *bridgev2.UserLogin parameter is explicitly ignored (_). If this signature is required for interface compliance or future use, consider adding a brief comment explaining why. Otherwise, the parameter could be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/integrations.go` around lines 536 - 546, The function
purgeLoginIntegrations has an unused parameter (the second arg currently written
as _ *bridgev2.UserLogin); either remove that parameter from the signature if
not required for interface compatibility, or keep it and add a short comment
above the function explaining why it is intentionally unused (e.g., needed for
interface compliance or planned future use). Update any callers or the interface
definition accordingly if you remove it, or add the comment referencing
purgeLoginIntegrations to avoid linter warnings.
config.example.yaml (1)

186-213: Consider documenting compaction mode options.

The compaction section has many tuning knobs. Consider adding inline comments explaining each mode's behavior (e.g., what "cache-ttl" means for compaction.mode versus the other modes, and what "safeguard" means for compaction_mode versus its alternatives).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config.example.yaml` around lines 186 - 213, Add concise inline comments in
config.example.yaml around the compaction block explaining the valid values and
behavior differences for compaction.mode (e.g., "cache-ttl" vs other modes),
compaction_mode (e.g., "safeguard" vs alternatives), and how they interact with
key fields like ttl, enabled, soft_trim_ratio, hard_clear_ratio,
hard_clear_enabled, keep_last_assistants, min_prunable_chars, soft_trim_*
settings, summarization_enabled/summarization_model/max_summary_tokens,
keep_recent_tokens/reserve_tokens/reserve_tokens_floor, and overflow_flush
settings; place the comments directly above or beside the keys (compaction.mode,
compaction_mode, ttl, summarization_enabled, overflow_flush) describing expected
effects, defaults, and when to use each mode so users can choose appropriate
values without external docs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bridges/ai/image_generation_tool.go`:
- Line 208: The code accesses loginMeta.Provider without verifying loginMeta is
non-nil; after calling loginMetadata(btc.Client.UserLogin) ensure you check for
a nil return and handle it (e.g., return an error or log and abort) before
reaching the switch on loginMeta.Provider; update the logic around the call site
(loginMetadata, the variable loginMeta, and the switch loginMeta.Provider) to
safely handle a nil loginMeta and avoid panics.

In `@bridges/ai/integrations_config.go`:
- Around line 447-451: The Provider lookup only normalizes the input name but
not the map keys, so a YAML entry keyed "OpenAI" won't match a lookup of "openai"; update
ModelsConfig.Provider to perform case-insensitive key matching by normalizing
both sides: compute normalized := strings.ToLower(strings.TrimSpace(name)) and
then either (a) iterate over cfg.Providers and return the value when
strings.ToLower(strings.TrimSpace(key)) == normalized (use this if you want a
minimal change), or (b) better, populate a normalizedProviders map (with
lowercased trimmed keys) when parsing/loading the config and return
cfg.normalizedProviders[normalized]; reference the ModelsConfig type and
Provider(name string) ModelProviderConfig method when applying the change.
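A runnable sketch of option (a), the minimal-change iteration approach, with stand-in ModelsConfig/ModelProviderConfig types (the real structs carry more fields):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical stand-ins for the config types named above.
type ModelProviderConfig struct{ BaseURL string }
type ModelsConfig struct{ Providers map[string]ModelProviderConfig }

// Provider matches provider names case-insensitively by normalizing
// both the lookup name and the map keys, so "OpenAI" in YAML still
// resolves for a lookup of "openai".
func (c ModelsConfig) Provider(name string) ModelProviderConfig {
	normalized := strings.ToLower(strings.TrimSpace(name))
	for key, cfg := range c.Providers {
		if strings.ToLower(strings.TrimSpace(key)) == normalized {
			return cfg
		}
	}
	return ModelProviderConfig{}
}

func main() {
	cfg := ModelsConfig{Providers: map[string]ModelProviderConfig{
		"OpenAI": {BaseURL: "https://api.openai.com"},
	}}
	fmt.Println(cfg.Provider(" openai ").BaseURL)
}
```

Option (b), precomputing a normalizedProviders map at load time, trades this O(n) scan for an O(1) lookup and is preferable if the provider list can grow.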

In `@bridges/ai/magic_proxy_test.go`:
- Around line 42-45: Tests for resolveServiceConfig in magic_proxy_test.go
currently assert Credentials.BaseURL but omit verifying Credentials.APIKey;
update the relevant test cases that construct &LoginCredentials{APIKey: "tok",
BaseURL: "..."} (both the case around lines 42-45 and the similar case at 68-71)
to also assert that the resolved config contains the nested API key (e.g.,
compare resolved.Credentials.APIKey or equivalent field) so resolveServiceConfig
is validated to read Credentials.APIKey as well as BaseURL.

In `@bridges/ai/prompt_builder.go`:
- Around line 227-231: The code currently appends the text PromptBlock before
the image/attachment "leadingBlocks", reversing the intended order; change the
assembly so leadingBlocks are added first and the optional text block
(PromptBlock{Type: PromptBlockText, Text: text}) is appended after them.
Concretely, in the PromptBuilder code that creates blocks (variables: blocks,
leadingBlocks, PromptBlock, PromptBlockText, text), initialize or append
leadingBlocks into blocks first and then, if strings.TrimSpace(text) != "",
append the text block so multimodal turns preserve the original leading block
order.
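The intended assembly order can be sketched as follows, with a simplified PromptBlock stand-in (the real builder's blocks carry media fields as well):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical minimal block type mirroring the builder's PromptBlock.
type PromptBlock struct {
	Type string
	Text string
}

// buildTurnBlocks keeps attachment/image leadingBlocks first and appends
// the optional text block after them, preserving the leading-block order.
func buildTurnBlocks(leadingBlocks []PromptBlock, text string) []PromptBlock {
	blocks := append([]PromptBlock{}, leadingBlocks...)
	if strings.TrimSpace(text) != "" {
		blocks = append(blocks, PromptBlock{Type: "text", Text: text})
	}
	return blocks
}

func main() {
	got := buildTurnBlocks([]PromptBlock{{Type: "image"}}, "caption")
	fmt.Println(got[0].Type, got[1].Type) // image first, then text
}
```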

In `@bridges/ai/prompt_context_local.go`:
- Around line 204-214: The Chat Completions branch in PromptBlockToolCall is
passing an empty string for ToolCallArguments which differs from
promptMessageToResponsesInputs that normalizes blank ToolCallArguments to "{}",
so update the PromptBlockToolCall handling to normalize empty or whitespace-only
block.ToolCallArguments to "{}" before constructing the
openai.ChatCompletionMessageFunctionToolCallFunctionParam (referencing
PromptBlockToolCall and
openai.ChatCompletionMessageFunctionToolCallFunctionParam) so both serializers
produce the same representation for empty-arg tool calls.
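The normalization itself is small enough to isolate; a hedged sketch (the helper name is assumed, not taken from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeToolCallArguments mirrors the Responses-side behavior described
// above: blank or whitespace-only arguments become the empty JSON object,
// so both serializers emit the same payload for empty-arg tool calls.
func normalizeToolCallArguments(args string) string {
	if strings.TrimSpace(args) == "" {
		return "{}"
	}
	return args
}

func main() {
	fmt.Println(normalizeToolCallArguments("   "))
	fmt.Println(normalizeToolCallArguments(`{"q":"x"}`))
}
```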

In `@bridges/ai/response_retry.go`:
- Around line 461-476: The current replacement out.Messages[i].Blocks =
[]PromptBlock{{Type: PromptBlockText, Text: trimmed}} discards the original
multi-block structure for tool results; instead, preserve msg.Blocks by
iterating the original blocks (out.Messages[i].Blocks or msg.Blocks) and
"consume" the trimmed string into each block's Text field in order until trimmed
is exhausted, updating each PromptBlock.Text and leaving block Type intact, and
then truncate/remove any remaining original blocks beyond the consumed content;
use ClonePromptContext, PromptRoleToolResult, msg.Text(),
airuntime.SoftTrimToolResult and out.Messages[i].Blocks to locate and implement
this logic so structure and semantics are preserved.
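A sketch of the consume-in-order strategy, using a simplified PromptBlock (the real implementation must also integrate with ClonePromptContext and airuntime.SoftTrimToolResult, which this sketch elides):

```go
package main

import "fmt"

// Hypothetical minimal block type; the bridge's PromptBlock has more fields.
type PromptBlock struct {
	Type string
	Text string
}

// redistributeTrimmedText walks the original blocks in order and consumes
// the trimmed string into each block's Text field, leaving Type intact
// and dropping any blocks left over once the trimmed content is exhausted.
func redistributeTrimmedText(blocks []PromptBlock, trimmed string) []PromptBlock {
	out := make([]PromptBlock, 0, len(blocks))
	for _, b := range blocks {
		if trimmed == "" {
			break
		}
		n := len(b.Text)
		if n > len(trimmed) {
			n = len(trimmed)
		}
		b.Text = trimmed[:n]
		trimmed = trimmed[n:]
		out = append(out, b)
	}
	return out
}

func main() {
	blocks := []PromptBlock{
		{Type: "text", Text: "hello "},
		{Type: "text", Text: "world"},
	}
	got := redistributeTrimmedText(blocks, "hello wo")
	fmt.Println(len(got), got[1].Text)
}
```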

In `@bridges/ai/streaming_responses_api.go`:
- Around line 118-120: The continuation guard currently exits early when both
state.pendingFunctionOutputs and state.pendingMcpApprovals are empty, which
drops steering prompts queued by processResponseStreamEvent via
addPendingSteeringPrompts; update the guard condition to also check
len(state.pendingSteeringPrompts) so it only returns when all three are empty
(i.e., require pendingFunctionOutputs==0 && pendingMcpApprovals==0 &&
pendingSteeringPrompts==0), ensuring buildContinuationParams() is invoked to
consume and send queued steering prompts.
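The corrected guard reads more clearly as a positive predicate; a sketch with stand-in state fields (the real streamState holds richer types):

```go
package main

import "fmt"

// Hypothetical stand-in for the streaming state described above.
type streamState struct {
	pendingFunctionOutputs []string
	pendingMcpApprovals    []string
	pendingSteeringPrompts []string
}

// needsContinuation returns true when any queued work remains — including
// steering prompts — so buildContinuationParams still runs for them.
func needsContinuation(s *streamState) bool {
	return len(s.pendingFunctionOutputs) > 0 ||
		len(s.pendingMcpApprovals) > 0 ||
		len(s.pendingSteeringPrompts) > 0
}

func main() {
	steeringOnly := &streamState{pendingSteeringPrompts: []string{"focus on tests"}}
	fmt.Println(needsContinuation(steeringOnly), needsContinuation(&streamState{}))
}
```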

In `@helpers.go`:
- Around line 459-467: The call to portal.Bridge.Bot.SendState dereferences
portal.Bridge.Bot without checking for nil which can panic if the bot is not
configured; add a nil check for portal.Bridge.Bot before calling SendState
(mirror the existing pattern used where login.Bridge.Bot != nil), and if nil
return false (or skip SendState) so the block around portal.MXID,
matrixevents.AIRoomInfoEventType and the event.Content creation is only executed
when portal.Bridge.Bot is non-nil.
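A sketch of the nil-guard pattern with hypothetical stand-ins for the bridgev2 types (field names and the SendState signature are simplified):

```go
package main

import "fmt"

// Hypothetical stand-ins for the bridge/portal types referenced above.
type Bot struct{}

func (b *Bot) SendState(roomID string) bool { return true }

type Bridge struct{ Bot *Bot }

type Portal struct {
	MXID   string
	Bridge *Bridge
}

// sendRoomInfoState guards the Bot dereference before calling SendState,
// mirroring the login.Bridge.Bot != nil pattern mentioned in the comment.
func sendRoomInfoState(portal *Portal) bool {
	if portal.Bridge == nil || portal.Bridge.Bot == nil {
		return false // bot not configured; skip instead of panicking
	}
	return portal.Bridge.Bot.SendState(portal.MXID)
}

func main() {
	noBot := &Portal{MXID: "!room:example.com", Bridge: &Bridge{}}
	withBot := &Portal{MXID: "!room:example.com", Bridge: &Bridge{Bot: &Bot{}}}
	fmt.Println(sendRoomInfoState(noBot), sendRoomInfoState(withBot))
}
```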

In `@pkg/integrations/memory/integration.go`:
- Around line 235-249: AlreadyFlushed currently dereferences call.Meta without
guarding for nil and will panic if hooks run without metadata; add an early
nil-check (if call.Meta == nil) and return false before calling
call.Meta.ModuleMetaValue, and apply the same nil-guard pattern to
shouldBootstrapMemoryPromptContext (which also reads ModuleMetaValue) so both
functions safely no-op when Meta is missing; leave MarkFlushed's existing
nil-check intact.
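A sketch of the guard, with stand-in Call/CallMeta types (ModuleMetaValue's real signature in the memory integration differs):

```go
package main

import "fmt"

// Hypothetical stand-ins for the memory integration's call metadata.
type CallMeta struct{ flushed bool }

func (m *CallMeta) ModuleMetaValue() bool { return m.flushed }

type Call struct{ Meta *CallMeta }

// AlreadyFlushed no-ops safely when hooks run without metadata,
// returning false instead of dereferencing a nil Meta.
func AlreadyFlushed(call *Call) bool {
	if call.Meta == nil {
		return false
	}
	return call.Meta.ModuleMetaValue()
}

func main() {
	fmt.Println(AlreadyFlushed(&Call{}))
	fmt.Println(AlreadyFlushed(&Call{Meta: &CallMeta{flushed: true}}))
}
```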

In `@sdk/connector_helpers.go`:
- Around line 42-46: ResolveCommandPrefix currently trims prefix only for
emptiness checking, causing whitespace-padded prefixes like " /" to be returned
unchanged; update ResolveCommandPrefix to compute a trimmed value (e.g., t :=
strings.TrimSpace(prefix)) and return t when non-empty, otherwise return
fallback so callers get the trimmed prefix rather than one with surrounding
whitespace.
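The fix is a one-liner; a sketch of the corrected helper:

```go
package main

import (
	"fmt"
	"strings"
)

// ResolveCommandPrefix returns the trimmed prefix when non-empty,
// otherwise the fallback, so " / " resolves to "/" rather than being
// returned with surrounding whitespace.
func ResolveCommandPrefix(prefix, fallback string) string {
	if t := strings.TrimSpace(prefix); t != "" {
		return t
	}
	return fallback
}

func main() {
	fmt.Println(ResolveCommandPrefix("  / ", "!ai"))
	fmt.Println(ResolveCommandPrefix("   ", "!ai"))
}
```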

---

Outside diff comments:
In `@bridges/ai/constructors.go`:
- Around line 20-79: The constructor removed the ConfigUpgrader wiring so older
installs won't auto-migrate AI config keys; update the
bridgesdk.NewStandardConnectorConfig call (oc.sdkConfig) to include a
ConfigUpgrader parameter that runs the AI config upgrader (e.g., set
ConfigUpgrader: oc.configUpgrader or a function that calls the existing AI
config upgrade logic), ensuring the upgrader function references the connector
config (oc.Config) and is invoked during connector init to migrate legacy
provider/pruning keys.

In `@bridges/ai/desktop_api_sessions.go`:
- Around line 161-179: The code collects DesktopAPI instances from
loginCredentials (loginMetadata(oc.UserLogin)) into instances but when applying
the fallback creds.ServiceTokens.DesktopAPI it only skips if
desktopDefaultInstance key exists; update this logic so that when a non-empty
fallback token exists you either create the default entry or merge the token
into an existing DesktopAPIInstance whose Token is empty (leave BaseURL intact).
Locate the block that checks token :=
strings.TrimSpace(creds.ServiceTokens.DesktopAPI) and change it to: if token !=
"" then if instances[desktopDefaultInstance] exists and
instances[desktopDefaultInstance].Token is empty set its Token to token,
otherwise add instances[desktopDefaultInstance] = DesktopAPIInstance{Token:
token}.
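A runnable sketch of that merge logic, with a stand-in DesktopAPIInstance and an assumed value for the desktopDefaultInstance key:

```go
package main

import (
	"fmt"
	"strings"
)

const desktopDefaultInstance = "default" // assumed key name

// Hypothetical stand-in for the bridge's DesktopAPIInstance.
type DesktopAPIInstance struct {
	BaseURL string
	Token   string
}

// applyFallbackToken merges a legacy service token into the instance map:
// it fills the default entry's empty Token (leaving BaseURL intact) or
// creates the default entry when absent.
func applyFallbackToken(instances map[string]DesktopAPIInstance, fallback string) {
	token := strings.TrimSpace(fallback)
	if token == "" {
		return
	}
	if inst, ok := instances[desktopDefaultInstance]; ok {
		if inst.Token == "" {
			inst.Token = token
			instances[desktopDefaultInstance] = inst
		}
		return
	}
	instances[desktopDefaultInstance] = DesktopAPIInstance{Token: token}
}

func main() {
	instances := map[string]DesktopAPIInstance{
		desktopDefaultInstance: {BaseURL: "https://desktop.local"},
	}
	applyFallbackToken(instances, "tok-123")
	inst := instances[desktopDefaultInstance]
	fmt.Println(inst.BaseURL, inst.Token) // BaseURL preserved, Token merged
}
```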

In `@pkg/integrations/memory/integration.go`:
- Around line 72-79: Replace direct use of scope.Meta.AgentID() with the
resolver agentIDFromEventMeta(scope.Meta) so the default-agent fallback is
applied; e.g., in the block that calls getManager(agentID) (and the similar
blocks at the other two occurrences around lines 298-303 and 441-445) compute
the effective agent ID via agentIDFromEventMeta and use that value when calling
getManager, returning the same tuple on error. Ensure you update all three
places (the shown block and the two other occurrences) to call
agentIDFromEventMeta instead of reading Meta.AgentID() directly.
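A sketch of the resolver with hypothetical stand-ins (the real agentIDFromEventMeta reads the integration's event metadata and the actual default-agent value may differ):

```go
package main

import "fmt"

const defaultAgentID = "default" // assumed fallback value

// Hypothetical stand-in for the integration's event metadata.
type EventMeta struct{ agentID string }

func (m *EventMeta) AgentID() string { return m.agentID }

// agentIDFromEventMeta applies the default-agent fallback described above:
// a nil Meta or an empty agent ID resolves to the default agent.
func agentIDFromEventMeta(meta *EventMeta) string {
	if meta == nil || meta.AgentID() == "" {
		return defaultAgentID
	}
	return meta.AgentID()
}

func main() {
	fmt.Println(agentIDFromEventMeta(nil))
	fmt.Println(agentIDFromEventMeta(&EventMeta{agentID: "research"}))
}
```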

---

Nitpick comments:
In `@bridges/openclaw/manager.go`:
- Around line 596-598: The background goroutine currently discards the error
returned by m.syncSessions(m.client.BackgroundContext(ctx)); change it to
capture the error and log it for consistency with the other call that logs
failures (use the same debug-level logging used elsewhere). Specifically, wrap
the call so the returned error is assigned (err := m.syncSessions(...)) and then
call the manager's logger (e.g., m.logger.Debugf or m.logger.Debug) to emit a
descriptive message including the error when err != nil; keep the call to
m.client.BackgroundContext(ctx) and the goroutine structure unchanged.

In `@pkg/agents/presets.go`:
- Line 10: The constant name ModelZAIGLM47 does not match its value
"z-ai/glm-5-turbo"; rename the constant to ModelZAIGLM5Turbo (leave the string
value unchanged) and update every reference to ModelZAIGLM47 to the new
identifier (e.g., the usages in the boss and beeper modules where that constant
is imported/used); ensure the project builds and run tests to catch any missed
references or documentation/comments that mention the old name.

In `@pkg/integrations/runtime/helpers.go`:
- Around line 9-15: Update the docstring for ZerologFromHost to reflect its
current behavior: it returns zerolog.Nop() only when the provided host is nil
and otherwise returns host.RawLogger(); remove the stale phrase about falling
back when the underlying logger is not a zerolog.Logger and ensure the comment
accurately documents the nil-only fallback for the ZerologFromHost function.

In `@sdk/conversation_state_test.go`:
- Around line 82-84: The test currently discards the boolean result of
saveConversationStateToGenericMetadata(&holder, state) even though the comment
expects false; change the test to capture and assert that the call returns false
(e.g., result := saveConversationStateToGenericMetadata(&holder, state);
require.False(t, result) or assert.False(...)) to lock the contract. Keep the
call parameters (holder, state) and update the assertion style to match the test
suite's existing helpers.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 64dd2441-507a-4ce0-bf70-4eecfca6cd53

📥 Commits

Reviewing files that changed from the base of the PR and between 1a30f92 and 712e442.

📒 Files selected for processing (152)
  • bridges/ai/agent_loop_request_builders.go
  • bridges/ai/agent_loop_request_builders_test.go
  • bridges/ai/agent_loop_routing_test.go
  • bridges/ai/agent_loop_steering_test.go
  • bridges/ai/agent_loop_test.go
  • bridges/ai/beeper_models_generated.go
  • bridges/ai/beeper_models_manifest_test.go
  • bridges/ai/canonical_prompt_messages.go
  • bridges/ai/canonical_user_messages.go
  • bridges/ai/chat.go
  • bridges/ai/chat_login_redirect_test.go
  • bridges/ai/client.go
  • bridges/ai/client_capabilities_test.go
  • bridges/ai/command_registry.go
  • bridges/ai/compaction_summarization.go
  • bridges/ai/config_example_sync_test.go
  • bridges/ai/connector.go
  • bridges/ai/constructors.go
  • bridges/ai/constructors_test.go
  • bridges/ai/desktop_api_sessions.go
  • bridges/ai/error_logging.go
  • bridges/ai/handlematrix.go
  • bridges/ai/handlematrix_edit_test.go
  • bridges/ai/heartbeat_execute.go
  • bridges/ai/identifiers.go
  • bridges/ai/image_generation_tool.go
  • bridges/ai/image_generation_tool_magic_proxy_test.go
  • bridges/ai/image_understanding.go
  • bridges/ai/integration_host.go
  • bridges/ai/integrations.go
  • bridges/ai/integrations_config.go
  • bridges/ai/integrations_example-config.yaml
  • bridges/ai/integrations_example_config_test.go
  • bridges/ai/integrations_test.go
  • bridges/ai/internal_dispatch.go
  • bridges/ai/login.go
  • bridges/ai/login_loaders.go
  • bridges/ai/login_loaders_test.go
  • bridges/ai/magic_proxy_test.go
  • bridges/ai/mcp_helpers.go
  • bridges/ai/mcp_servers.go
  • bridges/ai/mcp_servers_test.go
  • bridges/ai/media_understanding_providers.go
  • bridges/ai/media_understanding_runner.go
  • bridges/ai/media_understanding_runner_openai_test.go
  • bridges/ai/messages.go
  • bridges/ai/messages_responses_input_test.go
  • bridges/ai/metadata.go
  • bridges/ai/model_catalog_test.go
  • bridges/ai/pending_event.go
  • bridges/ai/pending_event_test.go
  • bridges/ai/pending_queue.go
  • bridges/ai/prompt_builder.go
  • bridges/ai/prompt_context_local.go
  • bridges/ai/prompt_context_ops.go
  • bridges/ai/prompt_history_test.go
  • bridges/ai/prompt_projection_local.go
  • bridges/ai/provider_openai_chat.go
  • bridges/ai/provider_openai_responses.go
  • bridges/ai/provider_openai_responses_test.go
  • bridges/ai/provisioning.go
  • bridges/ai/queue_status_test.go
  • bridges/ai/response_finalization_test.go
  • bridges/ai/response_retry.go
  • bridges/ai/response_retry_test.go
  • bridges/ai/runtime_compaction_adapter.go
  • bridges/ai/runtime_defaults_test.go
  • bridges/ai/session_greeting.go
  • bridges/ai/session_greeting_test.go
  • bridges/ai/status_text.go
  • bridges/ai/streaming_chat_completions.go
  • bridges/ai/streaming_continuation.go
  • bridges/ai/streaming_error_handling_test.go
  • bridges/ai/streaming_executor.go
  • bridges/ai/streaming_init.go
  • bridges/ai/streaming_init_test.go
  • bridges/ai/streaming_input_conversion.go
  • bridges/ai/streaming_output_items_test.go
  • bridges/ai/streaming_request_tools_test.go
  • bridges/ai/streaming_responses_api.go
  • bridges/ai/streaming_responses_input_test.go
  • bridges/ai/streaming_state.go
  • bridges/ai/streaming_tool_selection.go
  • bridges/ai/streaming_tool_selection_test.go
  • bridges/ai/streaming_ui_tools_test.go
  • bridges/ai/subagent_announce.go
  • bridges/ai/subagent_spawn.go
  • bridges/ai/system_prompts.go
  • bridges/ai/text_files.go
  • bridges/ai/token_resolver.go
  • bridges/ai/tool_availability_configured_test.go
  • bridges/ai/tool_configured.go
  • bridges/ai/tools.go
  • bridges/ai/tools_analyze_image.go
  • bridges/ai/tools_search_fetch.go
  • bridges/ai/tools_search_fetch_test.go
  • bridges/ai/tools_tts_test.go
  • bridges/ai/turn_validation.go
  • bridges/ai/turn_validation_test.go
  • bridges/codex/connector.go
  • bridges/codex/connector_test.go
  • bridges/codex/constructors.go
  • bridges/codex/stream_mapping_test.go
  • bridges/dummybridge/bridge.go
  • bridges/dummybridge/connector.go
  • bridges/dummybridge/connector_test.go
  • bridges/dummybridge/runtime.go
  • bridges/dummybridge/runtime_test.go
  • bridges/openclaw/connector.go
  • bridges/openclaw/manager.go
  • bridges/openclaw/stream_test.go
  • bridges/opencode/connector.go
  • bridges/opencode/opencode_canonical_stream.go
  • cmd/generate-models/main.go
  • config.example.yaml
  • connector_builder.go
  • connector_builder_test.go
  • generate-models.sh
  • helpers.go
  • pkg/agents/presets.go
  • pkg/ai/beeper_models.json
  • pkg/connector/integrations_example-config.yaml
  • pkg/integrations/cron/integration.go
  • pkg/integrations/memory/integration.go
  • pkg/integrations/memory/manager.go
  • pkg/integrations/memory/module_exec.go
  • pkg/integrations/memory/overflow_exec.go
  • pkg/integrations/memory/prompt_exec.go
  • pkg/integrations/memory/sessions.go
  • pkg/integrations/runtime/helpers.go
  • pkg/integrations/runtime/host_types.go
  • pkg/integrations/runtime/interfaces.go
  • pkg/integrations/runtime/module_hooks.go
  • sdk/client.go
  • sdk/client_resolution_test.go
  • sdk/commands.go
  • sdk/connector.go
  • sdk/connector_helpers.go
  • sdk/connector_hooks_test.go
  • sdk/conversation.go
  • sdk/conversation_state_test.go
  • sdk/conversation_test.go
  • sdk/login_handle.go
  • sdk/part_apply_test.go
  • sdk/prompt_context.go
  • sdk/prompt_projection.go
  • sdk/runtime.go
  • sdk/turn.go
  • sdk/turn_data_test.go
  • sdk/turn_snapshot.go
  • sdk/turn_test.go
  • sdk/types.go
💤 Files with no reviewable changes (14)
  • bridges/ai/command_registry.go
  • bridges/ai/canonical_user_messages.go
  • bridges/ai/streaming_state.go
  • bridges/ai/media_understanding_providers.go
  • pkg/integrations/memory/module_exec.go
  • sdk/turn_snapshot.go
  • bridges/opencode/opencode_canonical_stream.go
  • pkg/connector/integrations_example-config.yaml
  • sdk/turn_data_test.go
  • bridges/ai/turn_validation_test.go
  • bridges/ai/provider_openai_chat.go
  • bridges/ai/turn_validation.go
  • sdk/prompt_projection.go
  • sdk/prompt_context.go
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Lint
  • GitHub Check: build-docker
  • GitHub Check: build-agentremote-docker (arm64)
  • GitHub Check: build-agentremote-docker (amd64)
  • GitHub Check: build-docker
  • GitHub Check: Lint
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-03-16T09:01:24.464Z
Learnt from: batuhan
Repo: beeper/agentremote PR: 71
File: bridges/ai/connector.go:53-63
Timestamp: 2026-03-16T09:01:24.464Z
Learning: In package ai, the AI connector’s configuration type (Config) defines Bridge as a value field of type BridgeConfig (not a pointer). Therefore, accessing oc.Config.Bridge.CommandPrefix in OpenAIConnector.applyRuntimeDefaults (bridges/ai/connector.go) is safe and does not require a nil-check.

Applied to files:

  • bridges/ai/constructors_test.go
  • bridges/ai/response_retry_test.go
  • bridges/ai/chat.go
  • bridges/ai/runtime_compaction_adapter.go
  • bridges/dummybridge/runtime_test.go
  • bridges/codex/connector_test.go
  • bridges/ai/tools_tts_test.go
  • bridges/ai/tools_analyze_image.go
  • bridges/dummybridge/connector_test.go
  • sdk/login_handle.go
  • bridges/ai/tool_configured.go
  • bridges/dummybridge/bridge.go
  • bridges/ai/tool_availability_configured_test.go
  • bridges/ai/runtime_defaults_test.go
  • bridges/ai/constructors.go
  • bridges/codex/connector.go
  • bridges/ai/connector.go
  • sdk/connector_hooks_test.go
  • bridges/codex/constructors.go
  • bridges/ai/session_greeting.go
  • sdk/connector.go
  • bridges/dummybridge/connector.go
  • connector_builder.go
  • bridges/opencode/connector.go
  • bridges/ai/media_understanding_runner_openai_test.go
  • bridges/ai/integrations.go
  • bridges/openclaw/connector.go
  • bridges/ai/pending_queue.go
  • bridges/ai/subagent_announce.go
  • pkg/integrations/runtime/interfaces.go
  • bridges/ai/token_resolver.go
  • sdk/connector_helpers.go
  • bridges/ai/integrations_config.go
  • bridges/ai/client.go
  • bridges/ai/integration_host.go

Add getFollowUpMessagesForTest in agent_loop_steering_test.go and update tests to call it instead of the AIClient method. Remove the oc.getFollowUpMessages implementation from pending_queue.go and delete the unused registerRoomRunPendingItem wrapper in room_runs.go (keep the locked registerRoomRunPendingItemLocked). These changes reduce API surface on AIClient and consolidate test-only logic into the test file.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bridges/ai/agent_loop_steering_test.go`:
- Around line 289-299: Add a second subcase to the test that calls
oc.buildContinuationParams with a nil PromptContext in addition to the existing
&PromptContext{} case so the nil-guard branch in buildContinuationParams is
exercised; for the new case, assert that params.Input.OfInputItemList contains
the stored steering prompt, that state.consumePendingSteeringPrompts() returns
an empty slice, and that the canonical prompt persistence check
(len(prompt.Messages) behavior) is validated appropriately (use the nil-aware
expectation since prompt will be nil). Reference buildContinuationParams,
PromptContext, and consumePendingSteeringPrompts when locating where to add the
new nil test case.

In `@bridges/ai/pending_queue.go`:
- Around line 179-190: The predicate in roomHasPendingQueueWork is racy because
it reads queue.items and queue.droppedCount under pendingQueuesMu while
enqueuePendingItem/getPendingQueue/drain mutate those same fields after
pendingQueuesMu is released; fix by introducing and using a per-queue mutex
(e.g., add mu on type pendingQueue) and always lock that mutex whenever reading
or writing queue.draining, queue.items, or queue.droppedCount. Concretely: keep
pendingQueuesMu only for map access in getPendingQueue/roomHasPendingQueueWork,
then acquire queue.mu before inspecting or modifying queue fields in
roomHasPendingQueueWork, enqueuePendingItem, and drainPendingQueue (or
alternatively move all mutations back under pendingQueuesMu consistently) so one
synchronization strategy protects queue state.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ac728c22-099b-4f1f-8ee7-ac1b27a5da6d

📥 Commits

Reviewing files that changed from the base of the PR and between 712e442 and e60b341.

📒 Files selected for processing (3)
  • bridges/ai/agent_loop_steering_test.go
  • bridges/ai/pending_queue.go
  • bridges/ai/room_runs.go
💤 Files with no reviewable changes (1)
  • bridges/ai/room_runs.go
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: build-agentremote-docker (amd64)
  • GitHub Check: build-agentremote-docker (arm64)
  • GitHub Check: build-docker
  • GitHub Check: Lint
  • GitHub Check: Lint
  • GitHub Check: build-docker
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-03-16T09:01:24.464Z
Learnt from: batuhan
Repo: beeper/agentremote PR: 71
File: bridges/ai/connector.go:53-63
Timestamp: 2026-03-16T09:01:24.464Z
Learning: In package ai, the AI connector’s configuration type (Config) defines Bridge as a value field of type BridgeConfig (not a pointer). Therefore, accessing oc.Config.Bridge.CommandPrefix in OpenAIConnector.applyRuntimeDefaults (bridges/ai/connector.go) is safe and does not require a nil-check.

Applied to files:

  • bridges/ai/pending_queue.go
🔇 Additional comments (3)
bridges/ai/pending_queue.go (1)

339-350: Nice move to bridge-owned prompt messages.

This keeps the trim/empty filtering in one place and returns the exact PromptMessage shape that PromptContext.Messages expects.

bridges/ai/agent_loop_steering_test.go (2)

12-32: Good test seam for follow-up prompts.

This helper reuses takePendingQueueDispatchCandidate and preparePendingQueueDispatchCandidate, so the follow-up assertions stay aligned with the production queue semantics after the SDK-specific helper removal.


79-87: The PromptMessage-level assertions look solid.

These cases now validate the bridge contract (Role/Text()) and the consume/drain behavior directly, without coupling the tests to any provider-specific message encoding.

Also applies to: 104-105, 127-132, 153-161, 185-190, 210-226, 244-244, 266-266

Several robustness fixes, behavior tweaks, and tests across the AI bridge code paths:

- Merge fallback DesktopAPI token into the default instance when default has no token and add unit test.
- Add nil checks and clearer errors in image generation paths (provider resolution and OpenAI images base URL) and add tests for missing login metadata.
- Normalize provider lookup in ModelsConfig.Provider to match keys case/whitespace and add a unit test.
- Preserve block ordering in prompt building and normalize blank tool-call arguments to "{}"; add test.
- Clone []byte values when cloning pending event raw values.
- Implement rewriting of trimmed tool-result blocks to preserve block structure when truncating oversized tool results; add test.
- Include pendingSteeringPrompts in agent loop/turn completion checks.
- Improve pdftotext error reporting when the tool is missing.
- Make dummybridge onDisconnect a no-op with an unused param.
- Log errors when OpenClaw session sync fails (instead of silently ignoring the error).
- Expand example config with compaction/overflow docs and defaults.
- Fix connector init hook test to assert the passed bridge pointer and add test for ResolveCommandPrefix trimming behavior.
- Tighten SendAIRoomInfo to require portal.Bridge and portal.Bridge.Bot.
- Rename ZAIGLM model constant and update agent presets and fallbacks to use the new constant.
- Consolidate agent ID extraction in memory integration (use agentIDFromEventMeta) and add nil guards for meta in several places.
- Make ZerologFromHost return Nop when host is nil and trim configured command prefix before returning.
- Minor test additions and adjustments across multiple AI bridge tests and helpers.

These changes aim to reduce panics on nil inputs, preserve data shapes when trimming content, and increase test coverage for edge cases.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
bridges/ai/prompt_context_local.go (3)

126-126: Unused parameter supportsVideoURL.

The supportsVideoURL parameter is declared but never used in the function body. Either remove it or implement the video URL handling logic if it's intended for future use.

Suggested fix (if not needed)
-func promptContextToChatCompletionMessages(ctx PromptContext, supportsVideoURL bool) []openai.ChatCompletionMessageParamUnion {
+func promptContextToChatCompletionMessages(ctx PromptContext) []openai.ChatCompletionMessageParamUnion {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_context_local.go` at line 126, The function
promptContextToChatCompletionMessages has an unused parameter supportsVideoURL;
remove this parameter from the signature and from all call sites (i.e., update
calls to promptContextToChatCompletionMessages to pass only the PromptContext),
or if video URL handling is required, implement the intended behavior by using
supportsVideoURL inside promptContextToChatCompletionMessages to include or
transform video URL-related content into the produced
openai.ChatCompletionMessageParamUnion messages; ensure you update any
references/tests that call this function to match the new signature or behavior.

31-31: Redundant TrimSpace call.

Both *dst and text are already trimmed (line 23 trims text, and *dst would only be set from a previously trimmed value via line 28). The outer TrimSpace on line 31 is unnecessary.

Suggested simplification
-	*dst = strings.TrimSpace(*dst + "\n\n" + text)
+	*dst = *dst + "\n\n" + text
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_context_local.go` at line 31, The TrimSpace call wrapping
the concatenation is redundant—since text is already trimmed and dst only ever
holds trimmed values, replace the assignment using strings.TrimSpace with a
direct concatenation: set *dst = *dst + "\n\n" + text (or use strings.Join if
you prefer) inside the same function so you update the pointer dst without the
unnecessary strings.TrimSpace call.

364-374: hasUnsupportedResponsesPromptContext may miss unsupported block types in tool result messages.

The function iterates all messages and blocks but only checks block types. For PromptRoleToolResult messages, the Responses API path (lines 115-120) expects specific fields (ToolCallID, text content). If a tool result message has unexpected block types, this function would return false (supported), but promptMessageToResponsesInputs would silently return nil. Consider whether this is the intended behavior or if additional validation is needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_context_local.go` around lines 364 - 374, The current
hasUnsupportedResponsesPromptContext only checks block.Type and can incorrectly
mark PromptRoleToolResult messages as supported even when they lack the specific
fields/blocks needed by promptMessageToResponsesInputs; update
hasUnsupportedResponsesPromptContext to special-case messages with Role ==
PromptRoleToolResult and validate that they contain the expected tool-result
structure (required ToolCallID present and at least the expected text/result
block types), and return true if those conditions are not met so the caller and
promptMessageToResponsesInputs remain consistent.
bridges/ai/streaming_responses_api.go (1)

169-172: Duplicated continuation condition.

The same condition len(state.pendingFunctionOutputs) > 0 || len(state.pendingMcpApprovals) > 0 || len(state.pendingSteeringPrompts) > 0 appears on both lines 169 and 172. Consider extracting to a helper method on streamingState for clarity and maintainability.

Suggested refactor

Add a method to streamingState:

func (s *streamingState) hasPendingActions() bool {
    return len(s.pendingFunctionOutputs) > 0 || len(s.pendingMcpApprovals) > 0 || len(s.pendingSteeringPrompts) > 0
}

Then simplify the returns:

 	if done {
-		return state != nil && (len(state.pendingFunctionOutputs) > 0 || len(state.pendingMcpApprovals) > 0 || len(state.pendingSteeringPrompts) > 0), nil, nil
+		return state != nil && state.hasPendingActions(), nil, nil
 	}

-	return state != nil && (len(state.pendingFunctionOutputs) > 0 || len(state.pendingMcpApprovals) > 0 || len(state.pendingSteeringPrompts) > 0), nil, nil
+	return state != nil && state.hasPendingActions(), nil, nil
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/streaming_responses_api.go` around lines 169 - 172, Extract the
duplicated continuation condition into a helper on streamingState: implement
func (s *streamingState) hasPendingActions() bool that returns whether
len(s.pendingFunctionOutputs)>0 || len(s.pendingMcpApprovals)>0 ||
len(s.pendingSteeringPrompts)>0, then replace the two return expressions in
streaming_responses_api.go to call state.hasPendingActions() (e.g., return state
!= nil && state.hasPendingActions(), nil, nil) to remove duplication and improve
clarity.
bridges/ai/prompt_projection_local.go (1)

101-109: Pick the latest user message here, not index 0.

The comment says "latest user message", but Line 107 always reads the first slice element. That's fine for today's single-element caller, but it makes this helper easy to misuse and silently project stale input if a longer slice slips through.

♻️ Small hardening
 func turnDataFromUserPromptMessages(messages []PromptMessage) (sdk.TurnData, bool) {
 	if len(messages) == 0 {
 		return sdk.TurnData{}, false
 	}
-	msg := messages[0]
-	if msg.Role != PromptRoleUser {
+	var (
+		msg   PromptMessage
+		found bool
+	)
+	for i := len(messages) - 1; i >= 0; i-- {
+		if messages[i].Role == PromptRoleUser {
+			msg = messages[i]
+			found = true
+			break
+		}
+	}
+	if !found {
 		return sdk.TurnData{}, false
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_projection_local.go` around lines 101 - 109,
turnDataFromUserPromptMessages currently uses messages[0] which picks the first
element instead of the latest; change the function to scan the messages slice
from the end to find the most recent PromptRoleUser entry (or pick
messages[len(messages)-1] if you prefer a direct latest-element approach) and
use that user message for the projected sdk.TurnData so the helper won't
silently return stale input when given a longer slice; reference the messages
slice, PromptRoleUser check, and turnDataFromUserPromptMessages when updating
the logic.
bridges/ai/integrations_config.go (1)

112-117: Centralize defaulting for the new agent fields.

This file already uses WithDefaults() on other config types, but connector.go still allocates &AgentDefaultsConfig{} and only patches Compaction. A matching AgentDefaultsConfig.WithDefaults() would keep initialization from drifting across files as this struct grows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/integrations_config.go` around lines 112 - 117,
AgentDefaultsConfig is being constructed ad-hoc in connector.go (allocated as
&AgentDefaultsConfig{} and only Compaction patched) so defaulting drifts as new
fields (Model, ImageModel, ImageGeneration, PDFModel, PDFEngine, Compaction) are
added; implement an AgentDefaultsConfig.WithDefaults() method that sets sensible
defaults for Compaction and any new agent fields and calls nested types'
WithDefaults() (e.g., ModelSelectionConfig.WithDefaults(),
airuntime.PruningConfig defaults) and then replace the &AgentDefaultsConfig{}
allocation in connector.go with AgentDefaultsConfig.WithDefaults() so
initialization is centralized and future fields get defaulted consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bridges/ai/integrations_config.go`:
- Around line 447-456: The Provider method currently normalizes the lookup key
but does not prevent multiple differently-cased keys from colliding in
cfg.Providers; update configuration loading to normalize keys (trim+ToLower)
when populating ModelsConfig.Providers and fail fast (return an error or panic
during load) if a normalized key is already present, or alternatively add a
validation step that iterates cfg.Providers and detects duplicate normalized
keys and returns an explicit error; reference ModelsConfig.Provider and the
cfg.Providers map (and the "key" variable used in the loop) to locate the
collision handling and normalization logic to change.

In `@bridges/ai/prompt_projection_local.go`:
- Around line 60-64: The tool-argument canonicalization currently leaves plain
string inputs unquoted, causing non-JSON output; update
canonicalPromptToolArguments (the path that handles sdk.TurnPart.Input when the
value is a string) to JSON-encode string arguments (e.g., marshal the string to
JSON so "abc" becomes "\"abc\"") before returning, and apply the same change to
the other occurrence that builds PromptBlock.ToolCallArguments (where
PromptBlock{Type: PromptBlockToolCall, ToolCallID, ToolName, ToolCallArguments:
canonicalPromptToolArguments(part.Input)} is used) so string inputs are
consistently valid JSON across both uses.

---

Nitpick comments:
In `@bridges/ai/integrations_config.go`:
- Around line 112-117: AgentDefaultsConfig is being constructed ad-hoc in
connector.go (allocated as &AgentDefaultsConfig{} and only Compaction patched)
so defaulting drifts as new fields (Model, ImageModel, ImageGeneration,
PDFModel, PDFEngine, Compaction) are added; implement an
AgentDefaultsConfig.WithDefaults() method that sets sensible defaults for
Compaction and any new agent fields and calls nested types' WithDefaults()
(e.g., ModelSelectionConfig.WithDefaults(), airuntime.PruningConfig defaults)
and then replace the &AgentDefaultsConfig{} allocation in connector.go with
AgentDefaultsConfig.WithDefaults() so initialization is centralized and future
fields get defaulted consistently.

In `@bridges/ai/prompt_context_local.go`:
- Line 126: The function promptContextToChatCompletionMessages has an unused
parameter supportsVideoURL; remove this parameter from the signature and from
all call sites (i.e., update calls to promptContextToChatCompletionMessages to
pass only the PromptContext), or if video URL handling is required, implement
the intended behavior by using supportsVideoURL inside
promptContextToChatCompletionMessages to include or transform video URL-related
content into the produced openai.ChatCompletionMessageParamUnion messages;
ensure you update any references/tests that call this function to match the new
signature or behavior.
- Line 31: The TrimSpace call wrapping the concatenation is redundant—since text
is already trimmed and dst only ever holds trimmed values, replace the
assignment using strings.TrimSpace with a direct concatenation: set *dst = *dst
+ "\n\n" + text (or use strings.Join if you prefer) inside the same function so
you update the pointer dst without the unnecessary strings.TrimSpace call.
- Around line 364-374: The current hasUnsupportedResponsesPromptContext only
checks block.Type and can incorrectly mark PromptRoleToolResult messages as
supported even when they lack the specific fields/blocks needed by
promptMessageToResponsesInputs; update hasUnsupportedResponsesPromptContext to
special-case messages with Role == PromptRoleToolResult and validate that they
contain the expected tool-result structure (required ToolCallID present and at
least the expected text/result block types), and return true if those conditions
are not met so the caller and promptMessageToResponsesInputs remain consistent.

In `@bridges/ai/prompt_projection_local.go`:
- Around line 101-109: turnDataFromUserPromptMessages currently uses messages[0]
which picks the first element instead of the latest; change the function to scan
the messages slice from the end to find the most recent PromptRoleUser entry (or
pick messages[len(messages)-1] if you prefer a direct latest-element approach)
and use that user message for the projected sdk.TurnData so the helper won't
silently return stale input when given a longer slice; reference the messages
slice, PromptRoleUser check, and turnDataFromUserPromptMessages when updating
the logic.

In `@bridges/ai/streaming_responses_api.go`:
- Around line 169-172: Extract the duplicated continuation condition into a
helper on streamingState: implement func (s *streamingState) hasPendingActions()
bool that returns whether len(s.pendingFunctionOutputs)>0 ||
len(s.pendingMcpApprovals)>0 || len(s.pendingSteeringPrompts)>0, then replace
the two return expressions in streaming_responses_api.go to call
state.hasPendingActions() (e.g., return state != nil &&
state.hasPendingActions(), nil, nil) to remove duplication and improve clarity.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4026a681-2931-4210-b3fd-43181737bc86

📥 Commits

Reviewing files that changed from the base of the PR and between e60b341 and 9d20093.

📒 Files selected for processing (31)
  • bridges/ai/desktop_api_sessions.go
  • bridges/ai/desktop_api_sessions_test.go
  • bridges/ai/image_generation_tool.go
  • bridges/ai/image_generation_tool_magic_proxy_test.go
  • bridges/ai/integrations.go
  • bridges/ai/integrations_config.go
  • bridges/ai/integrations_config_test.go
  • bridges/ai/magic_proxy_test.go
  • bridges/ai/media_understanding_runner_openai_test.go
  • bridges/ai/pending_event.go
  • bridges/ai/prompt_builder.go
  • bridges/ai/prompt_context_local.go
  • bridges/ai/prompt_context_local_test.go
  • bridges/ai/prompt_projection_local.go
  • bridges/ai/response_retry.go
  • bridges/ai/response_retry_test.go
  • bridges/ai/streaming_responses_api.go
  • bridges/ai/text_files.go
  • bridges/dummybridge/bridge.go
  • bridges/openclaw/manager.go
  • config.example.yaml
  • connector_builder_test.go
  • helpers.go
  • pkg/agents/beeper.go
  • pkg/agents/boss.go
  • pkg/agents/presets.go
  • pkg/integrations/memory/integration.go
  • pkg/integrations/runtime/helpers.go
  • sdk/connector_helpers.go
  • sdk/connector_hooks_test.go
  • sdk/conversation_state_test.go
✅ Files skipped from review due to trivial changes (7)
  • sdk/conversation_state_test.go
  • pkg/agents/beeper.go
  • pkg/agents/boss.go
  • bridges/ai/pending_event.go
  • sdk/connector_hooks_test.go
  • bridges/ai/response_retry.go
  • pkg/integrations/memory/integration.go
🚧 Files skipped from review as they are similar to previous changes (9)
  • bridges/openclaw/manager.go
  • bridges/ai/magic_proxy_test.go
  • bridges/ai/media_understanding_runner_openai_test.go
  • bridges/ai/response_retry_test.go
  • pkg/agents/presets.go
  • bridges/ai/image_generation_tool_magic_proxy_test.go
  • bridges/ai/image_generation_tool.go
  • config.example.yaml
  • bridges/ai/prompt_builder.go
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: build-docker
  • GitHub Check: build-agentremote-docker (arm64)
  • GitHub Check: build-agentremote-docker (amd64)
  • GitHub Check: Lint
  • GitHub Check: build-docker
  • GitHub Check: Lint
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-03-16T09:01:24.464Z
Learnt from: batuhan
Repo: beeper/agentremote PR: 71
File: bridges/ai/connector.go:53-63
Timestamp: 2026-03-16T09:01:24.464Z
Learning: In package ai, the AI connector’s configuration type (Config) defines Bridge as a value field of type BridgeConfig (not a pointer). Therefore, accessing oc.Config.Bridge.CommandPrefix in OpenAIConnector.applyRuntimeDefaults (bridges/ai/connector.go) is safe and does not require a nil-check.

Applied to files:

  • bridges/ai/integrations_config_test.go
  • connector_builder_test.go
  • helpers.go
  • bridges/ai/integrations_config.go
  • bridges/ai/integrations.go
  • sdk/connector_helpers.go
🔇 Additional comments (33)
connector_builder_test.go (2)

17-39: Strong hook-contract coverage improvement.

Great update: this now verifies Init/Start/Stop all receive the exact same *bridgev2.Bridge and keeps lifecycle order assertions intact. This is a solid regression guard for connector callback wiring.


166-168: Signature update is correct and keeps test intent clear.

Adopting the new Stop(ctx, br) callback shape here is clean, and the test still validates cached client shutdown behavior as expected.

bridges/ai/desktop_api_sessions.go (1)

161-179: Credential-path migration and default-token merge look solid.

The new loginCredentials(loginMetadata(...)) path is guarded correctly, and the fallback token merge into the default instance preserves existing config fields while filling a missing token.

bridges/ai/desktop_api_sessions_test.go (1)

12-43: Great targeted regression test.

This test cleanly validates the fallback DesktopAPI token merge into the "default" instance while ensuring BaseURL remains unchanged.

bridges/dummybridge/bridge.go (6)

34-39: LGTM!

Clean validation helper with appropriate nil checks for both the session pointer and nested login field.


41-41: LGTM!

Returning the concrete *dummySession type instead of any improves type safety and eliminates the need for runtime type assertions by callers.


60-60: LGTM!

Empty disconnect handler is fine for a dummy bridge, and the typed parameter is consistent with the refactoring.


62-72: LGTM!

Consistent use of requireSession for validation and typed *dummySession parameter improves type safety.


74-97: LGTM!

Same consistent pattern as getContactList - typed parameter with proper validation before use.


99-108: LGTM!

Completes the typed session refactoring consistently across all connector callbacks.

bridges/ai/text_files.go (4)

8-10: LGTM!

Standard library imports required for temp file handling and external command execution.


28-28: LGTM!

50 MB limit is reasonable for PDF files, which are typically larger than plain text but will be converted to much smaller text output by pdftotext.


199-237: Well-implemented PDF text extraction with good security practices.

The implementation handles:

  • Proper temp file cleanup via defer
  • Secure file permissions (0o600)
  • Context-aware command execution for cancellation support
  • Comprehensive error handling with helpful messages for missing dependencies
  • Graceful handling of PDFs with no extractable text

One minor note: This introduces an external runtime dependency on pdftotext (from poppler-utils). The error message at line 220 helpfully guides users to install it.


239-261: LGTM!

Clean integration of the truncated parameter. The leading space in the attribute string ensures correct XML formatting:

  • When truncated=false: <file name="..." mime="...">
  • When truncated=true: <file name="..." mime="..." truncated="true">
bridges/ai/prompt_context_local.go (2)

204-221: Tool-call argument normalization is correctly implemented.

The Chat Completions path now normalizes blank ToolCallArguments to "{}" (lines 208-211), matching the Responses API path (lines 108-111). This ensures consistent serialization across both providers, addressing the previous review feedback.


269-280: No nil-safety issue — this pattern is safe.

OfString is of type param.Opt[string] (a value type, not a pointer), so accessing .Value directly cannot panic. The code is safe as written and matches the pattern used throughout the SDK and codebase (e.g., error_logging.go:79 calls .Valid() without nil checks).

However, a more explicit pattern would be to check .Valid() first before accessing the value:

if content.OfString.Valid() && content.OfString.Value != "" {
    return content.OfString.Value
}

This makes the discriminated union handling clearer, though the current code is functionally correct since comparing an unset param.Opt[string] to empty string yields false as expected.

bridges/ai/prompt_context_local_test.go (1)

5-23: Good test coverage for the normalization fix.

The test correctly validates that whitespace-only ToolCallArguments are normalized to "{}". Consider adding additional test cases for edge cases like empty string "" and valid JSON arguments to ensure they're handled correctly.

bridges/ai/streaming_responses_api.go (3)

118-120: Steering prompts are now correctly included in the continuation guard.

The guard condition now checks all three pending queues (pendingFunctionOutputs, pendingMcpApprovals, and pendingSteeringPrompts), ensuring queued steering prompts are not dropped. This addresses the previous review feedback.


37-54: Clean refactor to use PromptContext for input building.

The initialization now correctly derives Responses API input from a.prompt using promptContextToResponsesInput, and passes the system prompt separately to buildResponsesAgentLoopParams. This aligns with the centralized prompt handling approach.


449-472: Function signature change is consistent with the refactoring.

The rename from runResponsesAgentLoop to runResponsesAgentLoopPrompt and the switch to accepting PromptContext aligns with the broader refactoring pattern. The relevant code snippet from response_retry.go confirms callers have been updated accordingly.

pkg/integrations/runtime/helpers.go (1)

9-15: LGTM!

The simplification is valid since Host.RawLogger() is explicitly typed to return zerolog.Logger in the interface definition (see pkg/integrations/runtime/module_hooks.go:151), making the previous type assertion redundant. The nil check on line 12 handles the case when host itself is nil.

helpers.go (1)

452-468: LGTM – the past review concern has been addressed.

The nil guard on line 453 now properly checks portal.Bridge and portal.Bridge.Bot before dereferencing, which resolves the previously flagged panic risk. The new implementation correctly uses SendState with proper error handling and logging.

sdk/connector_helpers.go (4)

19-31: LGTM!

The generic conversion of BuildStandardMetaTypes is well-designed. The wrapper closures correctly adapt typed constructors to the func() any signature required by agentremote.BuildMetaTypes, providing type safety at call sites while maintaining backward compatibility.


40-48: LGTM – the past review concern has been addressed.

ResolveCommandPrefix now correctly returns the trimmed prefix rather than the original whitespace-padded value, fixing the issue where " /" would fail to match commands.


97-134: LGTM!

The generic struct definition correctly threads type parameters through all relevant fields. GetCapabilities now properly accepts SessionT, and the metadata constructor functions (NewPortal, NewMessage, NewLogin, NewGhost) are appropriately typed, improving type safety at call sites.


151-161: Consider adding a nil-check for DefaultCommandPrefix or document it as required.

Line 159 calls p.DefaultCommandPrefix() without checking if it's nil. While all current callers in the codebase provide this field, the API does not enforce it, and a caller could inadvertently omit it, causing a panic. Either add a defensive nil-check or document that DefaultCommandPrefix must be provided.

bridges/ai/prompt_projection_local.go (1)

24-34: Image round-trip looks consistent.

Forward and reverse projection both preserve URL, MIME type, and imageB64, while filtering empty image parts in both directions.

Also applies to: 123-126

bridges/ai/integrations.go (5)

243-243: Nice narrowing of the module registry API.

Threading integrationruntime.ModuleHooks through storage and lifecycle helpers removes the loose any surface without losing the later opt-in assertions for specific hooks.

Also applies to: 286-301, 331-348, 355-364, 415-421


272-274: The module-first/core-last ordering is the right precedence.

With first-wins behavior in both definition collection and execution dispatch, registering core tools after modules preserves intentional module overrides.


608-608: Good simplification on the core tool path.

call.Scope.Portal already has the exact type executeBuiltinToolDirect expects, so removing the redundant assertion keeps this path aligned with the runtime contract.


536-544: The purge integrations can safely operate from IDs alone. There is only one PurgeForLogin implementation (in pkg/integrations/memory/integration.go), and it uses only scope.BridgeID and scope.LoginID, plus i.resolveBridgeDB() which retrieves the database through i.host.BridgeDB() with no dependency on the full UserLogin object. The narrowed LoginScope is sufficient.


227-231: Verify that command integrations have not silently lost event/sender context after CommandScope API contraction.

CommandScope now contains only Portal and Meta, no longer carrying the triggering event or sender information. This appears at three locations: lines 227-231 in commandScope(), and similar returns at lines 258-259 and 480-485. Confirm that every CommandIntegration implementation either:

  1. Only uses Portal/Meta from the scope, or
  2. Explicitly retrieves any needed event/sender context from context.Context (e.g., via the CommandCall parameter in ExecuteCommand)

Without this verification, integrations risk silently dropping event or sender context that may be needed for proper execution.

bridges/ai/integrations_config_test.go (1)

5-16: Nice regression coverage for normalized provider names.

This locks in the whitespace/case-insensitive lookup path that ModelsConfig.Provider() now depends on.

Add synchronization to pending queue operations by introducing a mutex and locking critical sections to ensure thread-safety. Normalize model provider keys during YAML unmarshalling (ModelsConfig.UnmarshalYAML) so lookups are case/whitespace-insensitive and reject collisions; update Provider() to use the normalized map. Improve canonicalPromptToolArguments to preserve canonical JSON, JSON-decode strings when possible, and JSON-encode plain strings; add tests for this behavior. Refactor agent_loop_steering_test to add subtests for nil vs non-nil prompt handling and add tests for model normalization collision detection.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
bridges/ai/pending_queue.go (1)

268-272: ⚠️ Potential issue | 🔴 Critical

Handle summary-only snapshots before touching snapshot.items[0].

This branch can run with droppedCount > 0 and len(snapshot.items) == 0, because summarized queues are kept around until the summary is consumed. Line 269 panics before the lastItem fallback can run. Pick lastItem first, and only index items[0] when the slice is non-empty.

Possible fix
-		item := snapshot.items[0]
-		if snapshot.lastItem != nil {
-			item = *snapshot.lastItem
-		}
+		var item pendingQueueItem
+		switch {
+		case snapshot.lastItem != nil:
+			item = *snapshot.lastItem
+		case len(snapshot.items) > 0:
+			item = snapshot.items[0]
+		default:
+			return nil, snapshot
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/pending_queue.go` around lines 268 - 272, The branch handling
summarized snapshots reads snapshot.items[0] before checking that the slice is
non-empty, which can panic when droppedCount>0 and items is empty; update the
logic in the block that checks snapshot.dropPolicy ==
airuntime.QueueDropSummarize to prefer snapshot.lastItem (if non-nil) and only
access snapshot.items[0] when len(snapshot.items) > 0, i.e., choose lastItem
first and fall back to items[0] only when the slice has elements, ensuring no
index is taken on an empty slice.
♻️ Duplicate comments (1)
bridges/ai/pending_queue.go (1)

153-161: ⚠️ Potential issue | 🔴 Critical

Same locking gap remains in these helpers.

The len(queue.items), queue.droppedCount, and queue.draining checks on Lines 157, 212, and 376 still happen before queue.mu.Lock(). enqueuePendingItem mutates those fields while holding only queue.mu, so this is the same race in a different spot. Only the queue == nil / count <= 0 checks should stay outside; move the field-based early returns under queue.mu and rerun the race detector.

Also applies to: 208-216, 372-383

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/pending_queue.go` around lines 153 - 161, The early-return checks
that inspect queue-internal fields are racing with enqueuePendingItem: move the
checks of queue.items length, queue.droppedCount and queue.draining so they
occur only after acquiring queue.mu (i.e. keep only the nil and count <= 0
checks outside the pendingQueuesMu-protected block, then call queue.mu.Lock()
and perform len(queue.items), queue.droppedCount and queue.draining checks
inside popQueueItems and the other helpers at lines referenced); ensure you
still defer queue.mu.Unlock() and rerun the race detector to verify no races
remain between pendingQueuesMu/pendingQueues access and queue-internal mutations
made by enqueuePendingItem.
🧹 Nitpick comments (2)
bridges/ai/prompt_projection_local_test.go (1)

11-15: Consider adding edge case tests.

The current tests cover the main happy paths. Consider adding tests for edge cases such as:

  • Empty string input → should return "{}"
  • Whitespace-only string → should return "{}"
  • nil input → should return "{}"
  • Non-string types (e.g., map[string]any)

These would improve confidence in the canonicalization logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/prompt_projection_local_test.go` around lines 11 - 15, Add new
unit tests alongside TestCanonicalPromptToolArgumentsPreservesJSONStrings that
cover edge cases for canonicalPromptToolArguments: verify that empty string
input and whitespace-only string return `"{}"`, verify that passing nil returns
`"{}"`, and verify that passing a non-string value (e.g., a map[string]any) is
canonicalized as `"{}"` or the defined default; create separate subtests or
Test... functions calling canonicalPromptToolArguments with each of these inputs
and use t.Fatalf/t.Errorf to assert the returned string equals the expected
`"{}"`.
bridges/ai/agent_loop_steering_test.go (1)

169-229: Add a regression case for the summary-only queue state.

These synthetic-summary tests always keep one live item in items, so they don't exercise the droppedCount > 0 && len(items) == 0 path that now depends on lastItem. A subtest with items: nil and lastItem populated would lock in the fix for the panic in takePendingQueueDispatchCandidate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/agent_loop_steering_test.go` around lines 169 - 229, Add a
regression test that exercises the "summary-only" pending queue state where
droppedCount > 0 and items is nil but lastItem is populated to ensure
takePendingQueueDispatchCandidate no longer panics; specifically, in the
existing TestGetFollowUpMessages_... tests (or a new
TestGetFollowUpMessages_SummaryOnlyQueue), construct an AIClient.pendingQueues
entry for the roomID with mode = airuntime.QueueModeFollowup, dropPolicy =
airuntime.QueueDropSummarize, droppedCount > 0, summaryLines populated, items:
nil, and lastItem set to a pendingQueueItem (e.g., pendingMessage Type
pendingTypeText with MessageBody "latest"); then call
getFollowUpMessagesForTest(oc, roomID) once to assert the synthetic summary
message is returned and again to assert the lastItem ("latest") is returned and
finally that the queue is drained, which locks in the fix in
takePendingQueueDispatchCandidate that handles lastItem when items is empty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bridges/ai/pending_queue.go`:
- Around line 181-189: The code currently does a shallow struct copy of
pendingQueue via "clone := *queue", which copies the sync.Mutex (field mu) and
violates copylocks; instead, construct the snapshot by assigning each
exported/internal field explicitly while skipping mu: create a new pendingQueue
value (e.g., var clone pendingQueue) and set clone.items =
slices.Clone(queue.items), clone.summaryLines =
slices.Clone(queue.summaryLines), clone.lastItem = copy of queue.lastItem when
non-nil, and copy any other fields individually (but do not copy mu). Ensure the
lock/unlock around this copy remains and remove the "*queue" struct-copy.

---

Outside diff comments:
In `@bridges/ai/pending_queue.go`:
- Around line 268-272: The branch handling summarized snapshots reads
snapshot.items[0] before checking that the slice is non-empty, which can panic
when droppedCount>0 and items is empty; update the logic in the block that
checks snapshot.dropPolicy == airuntime.QueueDropSummarize to prefer
snapshot.lastItem (if non-nil) and only access snapshot.items[0] when
len(snapshot.items) > 0, i.e., choose lastItem first and fall back to items[0]
only when the slice has elements, ensuring no index is taken on an empty slice.

---

Duplicate comments:
In `@bridges/ai/pending_queue.go`:
- Around line 153-161: The early-return checks that inspect queue-internal
fields are racing with enqueuePendingItem: move the checks of queue.items
length, queue.droppedCount and queue.draining so they occur only after acquiring
queue.mu (i.e. keep only the nil and count <= 0 checks outside the
pendingQueuesMu-protected block, then call queue.mu.Lock() and perform
len(queue.items), queue.droppedCount and queue.draining checks inside
popQueueItems and the other helpers at lines referenced); ensure you still defer
queue.mu.Unlock() and rerun the race detector to verify no races remain between
pendingQueuesMu/pendingQueues access and queue-internal mutations made by
enqueuePendingItem.

---

Nitpick comments:
In `@bridges/ai/agent_loop_steering_test.go`:
- Around line 169-229: Add a regression test that exercises the "summary-only"
pending queue state where droppedCount > 0 and items is nil but lastItem is
populated to ensure takePendingQueueDispatchCandidate no longer panics;
specifically, in the existing TestGetFollowUpMessages_... tests (or a new
TestGetFollowUpMessages_SummaryOnlyQueue), construct an AIClient.pendingQueues
entry for the roomID with mode = airuntime.QueueModeFollowup, dropPolicy =
airuntime.QueueDropSummarize, droppedCount > 0, summaryLines populated, items:
nil, and lastItem set to a pendingQueueItem (e.g., pendingMessage Type
pendingTypeText with MessageBody "latest"); then call
getFollowUpMessagesForTest(oc, roomID) once to assert the synthetic summary
message is returned and again to assert the lastItem ("latest") is returned and
finally that the queue is drained, which locks in the fix in
takePendingQueueDispatchCandidate that handles lastItem when items is empty.

In `@bridges/ai/prompt_projection_local_test.go`:
- Around line 11-15: Add new unit tests alongside
TestCanonicalPromptToolArgumentsPreservesJSONStrings that cover edge cases for
canonicalPromptToolArguments: verify that empty string input and whitespace-only
string return `"{}"`, verify that passing nil returns `"{}"`, and verify that
passing a non-string value (e.g., a map[string]any) is canonicalized as `"{}"`
or the defined default; create separate subtests or Test... functions calling
canonicalPromptToolArguments with each of these inputs and use t.Fatalf/t.Errorf
to assert the returned string equals the expected `"{}"`.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ac3f92de-5fee-426a-b595-df98b09d389d

📥 Commits

Reviewing files that changed from the base of the PR and between 9d20093 and 93ca4f3.

📒 Files selected for processing (6)
  • bridges/ai/agent_loop_steering_test.go
  • bridges/ai/integrations_config.go
  • bridges/ai/integrations_config_test.go
  • bridges/ai/pending_queue.go
  • bridges/ai/prompt_projection_local.go
  • bridges/ai/prompt_projection_local_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • bridges/ai/integrations_config_test.go
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: build-docker
  • GitHub Check: build-agentremote-docker (amd64)
  • GitHub Check: Lint
  • GitHub Check: Lint
  • GitHub Check: build-docker
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-03-16T09:01:24.464Z
Learnt from: batuhan
Repo: beeper/agentremote PR: 71
File: bridges/ai/connector.go:53-63
Timestamp: 2026-03-16T09:01:24.464Z
Learning: In package ai, the AI connector’s configuration type (Config) defines Bridge as a value field of type BridgeConfig (not a pointer). Therefore, accessing oc.Config.Bridge.CommandPrefix in OpenAIConnector.applyRuntimeDefaults (bridges/ai/connector.go) is safe and does not require a nil-check.

Applied to files:

  • bridges/ai/agent_loop_steering_test.go
  • bridges/ai/integrations_config.go
🔇 Additional comments (10)
bridges/ai/prompt_projection_local_test.go (1)

5-9: LGTM!

Test correctly verifies that a plain string input is JSON-encoded to a proper JSON string literal.

bridges/ai/prompt_projection_local.go (6)

11-99: LGTM!

The promptMessagesFromTurnData function correctly handles user and assistant roles with appropriate block construction. The nil checks, whitespace trimming, and fallback handling (e.g., "Denied by user" for output-denied state) are well implemented.


101-131: LGTM!

The function correctly projects user prompt messages back to SDK turn data. The comment clarifies the intentional design to process only the first message, and the implementation handles both text and image blocks appropriately with proper validation.


133-139: LGTM!

Simple and safe helper function for extracting string values from the extra map.


141-146: LGTM!

Clean normalization function that handles the "dynamic-tool" to "tool" mapping.


148-181: Implementation addresses the previous review concern.

The canonicalPromptToolArguments function now correctly JSON-encodes string arguments. Plain strings like "hello" are properly encoded to "\"hello\"", while already-valid JSON strings are preserved after canonicalization. The tests in the companion test file confirm this behavior.


183-196: LGTM!

Clean utility function with appropriate fallback handling for edge cases where JSON marshaling might fail.

bridges/ai/integrations_config.go (3)

113-118: LGTM!

The new model selection and compaction fields are well-structured. Using pointers allows proper nil-checking for optional configuration, and the Compaction field integrates correctly with the runtime's pruningConfigOrDefault() pattern shown in runtime_compaction_adapter.go.


448-477: Well-implemented provider key normalization.

The UnmarshalYAML method correctly:

  1. Normalizes keys at load time (trim + lowercase)
  2. Rejects empty provider keys with a clear error
  3. Detects and rejects duplicate normalized keys (e.g., "OpenAI" vs " openai ")

This makes the Provider() lookup straightforward since all keys are pre-normalized. The previous concerns about case-sensitive lookups and nondeterministic collision behavior are now fully addressed.


229-240: No action needed. LinkPreviewConfig is properly defined in bridges/ai/linkpreview.go in the same package, making it available to ToolProvidersConfig in integrations_config.go. The restructuring is valid.

Comment on lines +181 to +189
queue.mu.Lock()
defer queue.mu.Unlock()
clone := *queue
clone.items = slices.Clone(queue.items)
clone.summaryLines = slices.Clone(queue.summaryLines)
if queue.lastItem != nil {
lastItem := *queue.lastItem
clone.lastItem = &lastItem
}

⚠️ Potential issue | 🟠 Major

Don't struct-copy a queue that now owns a sync.Mutex.

Line 183 copies pendingQueue.mu via clone := *queue. sync.Mutex must not be copied after first use, and this pattern is typically caught by go vet's copylocks check. Build the snapshot field-by-field instead, then rerun static analysis.

Possible fix
-	clone := *queue
-	clone.items = slices.Clone(queue.items)
-	clone.summaryLines = slices.Clone(queue.summaryLines)
+	clone := pendingQueue{
+		items:          slices.Clone(queue.items),
+		draining:       queue.draining,
+		lastEnqueuedAt: queue.lastEnqueuedAt,
+		mode:           queue.mode,
+		debounceMs:     queue.debounceMs,
+		cap:            queue.cap,
+		dropPolicy:     queue.dropPolicy,
+		droppedCount:   queue.droppedCount,
+		summaryLines:   slices.Clone(queue.summaryLines),
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bridges/ai/pending_queue.go` around lines 181 - 189, The code currently does
a shallow struct copy of pendingQueue via "clone := *queue", which copies the
sync.Mutex (field mu) and violates copylocks; instead, construct the snapshot by
assigning each exported/internal field explicitly while skipping mu: create a
new pendingQueue value (e.g., var clone pendingQueue) and set clone.items =
slices.Clone(queue.items), clone.summaryLines =
slices.Clone(queue.summaryLines), clone.lastItem = copy of queue.lastItem when
non-nil, and copy any other fields individually (but do not copy mu). Ensure the
lock/unlock around this copy remains and remove the "*queue" struct-copy.

@batuhan batuhan merged commit 02ccb6c into main Mar 30, 2026
10 of 12 checks passed