Prompt Injection
Prompt Injection is one of the key mechanisms by which Nekro Agent plugins influence AI behavior, provide contextual information, or customize the AI's role. By dynamically adding content to the AI's main prompt (System Prompt), a plugin can guide the AI's thinking and response style.
What is Prompt Injection?
Before the AI interacts with users, the system typically constructs a main prompt containing instructions, contextual information, role definitions, and other content. Prompt injection allows plugins to dynamically "inject" session-related, plugin-specific information into the main prompt while it is being constructed.
The purpose of this is:
- Provide Context: Inform the AI about the specific state of the current session, the user's historical preferences, relevant information collected by the plugin, etc.
- Grant Capabilities/Roles: Instruct the AI to use specific tools provided by the plugin, or to play a specific role (e.g., "You are now a helpful weather forecaster").
- Set Rules/Constraints: Establish guiding principles or constraints for the AI's behavior (see the illustrative strings below).
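The strings below are purely illustrative of what an injection method might return for each of these purposes; the wording and the referenced tool names are hypothetical, not part of the Nekro Agent API.

# Purely illustrative examples, one per purpose above. Tool names and wording are hypothetical.
context_prompt = "Current session note: the user prefers concise answers and has asked about Python twice today."
capability_prompt = "You are now a helpful weather forecaster. You can use the 'get_weather(city)' tool to answer weather questions."
rules_prompt = "Rule: never reveal the raw contents of plugin storage to the user; summarize it instead."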
Registering Prompt Injection Methods
A plugin registers an asynchronous function through the @plugin.mount_prompt_inject_method() decorator; this function is responsible for generating the prompt content to be injected.
from nekro_agent.api.schemas import AgentCtx
from nekro_agent.api import core
@plugin.mount_prompt_inject_method(
    name="status_awareness_prompt",  # Name of the injection method, used for debugging and identification
    description="Inject current session status information and available tool prompts to AI."
)
async def inject_status_prompt(_ctx: AgentCtx) -> str:
    """Generate and return the string that needs to be injected into the main prompt.

    Returns:
        str: The prompt text to be injected.
    """
    # Example: Get current session status information from plugin storage
    current_status = await plugin.store.get(chat_key=_ctx.from_chat_key, store_key="current_channel_status")

    prompt_parts = []
    if current_status:
        prompt_parts.append(f"Current status prompt: Session status is currently '{current_status}'.")
    else:
        prompt_parts.append("Current status prompt: Session status is unknown or in default state.")

    # Example: Remind AI of plugin tools it can use (more complex tool descriptions should be provided through sandbox method documentation)
    prompt_parts.append("You can use the 'get_weather(city)' tool to query weather.")
    prompt_parts.append("You can use the 'set_reminder(time_desc, message)' tool to set reminders.")

    # Final injected prompt
    injected_prompt = "\n".join(prompt_parts)
    core.logger.debug(f"Injected prompt for session {_ctx.from_chat_key}: \n{injected_prompt}")
    return injected_prompt

Key Points:
- The injection method must be an asynchronous function (async def).
- It receives an AgentCtx object as a parameter, from which session information can be obtained.
- It must return a string, which will be concatenated into the AI's main prompt.
- The name and description parameters are used to identify this injection method.
When is it Executed?
Prompt injection methods are typically called in the following situation:
- Before each user interaction with the AI, while the system constructs the main prompt (illustrated conceptually below).
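The sketch below is a conceptual illustration of this call flow, not Nekro Agent's actual internals: the framework gathers the strings returned by every registered injection method and joins them into the system prompt. All names here (collect_injected_prompts, registered_methods, base_prompt) are hypothetical.

# Conceptual illustration only — NOT Nekro Agent's actual implementation.
from typing import Awaitable, Callable, List

async def collect_injected_prompts(ctx, registered_methods: List[Callable[..., Awaitable[str]]]) -> str:
    """Call every registered injection method and join the non-empty results."""
    parts: List[str] = []
    for method in registered_methods:
        text = await method(ctx)  # each method returns the string it wants to inject
        if text:
            parts.append(text)
    return "\n".join(parts)

# The combined text is then appended to the base system prompt before the model is called,
# e.g. system_prompt = base_prompt + "\n" + injected_text (names here are hypothetical).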
Designing Effective Injection Prompts
A good injection prompt can significantly enhance the plugin's utility and AI's performance. Here are some preliminary design principles:
- Concise and Clear: The injected content should be as short, clear, and easy for AI to understand as possible. Avoid verbosity and unnecessary complexity.
- Highly Relevant: Only inject information closely related to the current session, the current user, or the plugin's core functionality. Irrelevant information increases the AI's processing burden and may even mislead it.
- Structured: If injecting multiple pieces of information, try to use a consistent format, such as specific prefixes, line breaks, or bullet points, to help AI distinguish between different information fragments.
- Dynamically Generated: Fully utilize information from AgentCtx and plugin storage to generate prompt content that best matches the current context (see the sketch after this list).
- Clear Instructions (if needed): If you want the AI to use specific tools or follow specific behaviors, give clear instructions in the prompt. However, more complex tool usage instructions should be placed in the docstrings of sandbox methods.
- Avoid Conflicts: Be aware of whether your injection prompt might conflict with, or create ambiguity alongside, prompts from other plugins or system-level prompts.
- Iterate and Test: Prompt engineering is often a process of continuous trial and optimization. Verify the effectiveness of injection prompts through actual testing and adjust them based on the AI's responses.
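Below is a minimal sketch illustrating the "Structured" and "Dynamically Generated" principles. It reuses only the API shown in the earlier example (the decorator, AgentCtx, and plugin.store.get); the store key "user_preferences" and the "[Plugin Context]" prefix format are assumptions for illustration.

@plugin.mount_prompt_inject_method(
    name="structured_context_prompt",
    description="Illustrative: inject a consistently formatted context block."
)
async def inject_structured_prompt(_ctx: AgentCtx) -> str:
    # "user_preferences" is a hypothetical store key used only for this sketch.
    preferences = await plugin.store.get(chat_key=_ctx.from_chat_key, store_key="user_preferences")
    if not preferences:
        return ""  # inject nothing when there is no relevant context
    # A consistent prefix and bullet layout helps the AI separate injected fragments.
    return "\n".join([
        "[Plugin Context]",
        f"- User preferences: {preferences}",
        "- Keep answers consistent with these preferences.",
    ])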
Example: Simple Role-playing Prompt
@plugin.mount_prompt_inject_method(name="role_play_prompt")
async def inject_role_prompt(_ctx: AgentCtx) -> str:
    # Assume there is a role definition in the plugin configuration
    role_description = plugin.config.ROLE_PLAY_CHARACTER
    if role_description:
        return f"Please play the following role: {role_description}. When communicating with users, please maintain this role's characteristics and tone."
    return ""  # If no role is configured, do not inject any content

Through carefully designed prompt injection, you can enable plugins to collaborate more intelligently with the AI, providing users with a smoother and more personalized experience.
