Prompt

In the Blades framework, Prompt is the core component for interacting with Large Language Models (LLMs). It represents a sequence of messages exchanged between the user and the assistant, supporting various message types (system messages, user messages, assistant messages) as well as templating functionality, enabling developers to build dynamic and context-aware AI applications.

In large language models, a Role indicates the sender of a message. The main roles are as follows:

The user role, representing input initiated by a human user or an external system. User messages can be added using blades.UserMessage, which is defined as follows:

func UserMessage[T contentPart](parts ...T) *Message {
	return &Message{ID: NewMessageID(), Role: RoleUser, Author: "user", Parts: Parts(parts...)}
}

Typical uses:

1. Asking questions, issuing instructions, providing contextual information.
2. In multi-turn conversations, each new user input should be marked as RoleUser.

The system role, representing pre-set instructions from the system, used to control model behavior, set task objectives, or provide global context.

Typical uses:

1. Defining the assistant's identity (e.g., "You are a Go programming expert").
2. Setting the output format, language style, safety boundaries, etc.
3. Injecting task descriptions or constraints before a workflow (Workflow) begins.

The assistant role, representing responses generated by the large model or the agent itself.

Typical uses:

1. Answering user questions.
2. Invoking tools (via function calls or the Tool Use protocol).
3. Maintaining contextual coherence in multi-turn conversations.
  • All model outputs should be marked as RoleAssistant.
  • The assistant may state that it needs to call a tool; the framework then executes the tool and returns the result as a RoleTool message.

The tool role, representing the result of executing an external tool, function, or API.

Typical uses:

1. Feeding the return value of a tool call back to the model for continued reasoning.
2. Implementing the ReAct (Reason + Act) loop of "think → call → observe → think again."
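The think → call → observe loop can be sketched with plain role-tagged values. This is a self-contained illustration, not the Blades API: the msg type, runTool helper, and the canned weather result are all hypothetical.

```go
package main

import "fmt"

// msg is a minimal stand-in for a role-tagged chat message (illustrative only).
type msg struct {
	Role    string // "user", "assistant", or "tool"
	Content string
}

// runTool simulates executing an external tool and returning its observation.
func runTool(call string) string {
	return `{"temp": 18, "condition": "sunny"}` // canned result for the sketch
}

// reactTurn appends one think → call → observe → think-again cycle to the history.
func reactTurn(history []msg) []msg {
	history = append(history, msg{Role: "assistant", Content: `call weather {"city": "Beijing"}`}) // think + act
	history = append(history, msg{Role: "tool", Content: runTool("weather")})                      // observe
	history = append(history, msg{Role: "assistant", Content: "It is 18°C and sunny in Beijing."}) // think again
	return history
}

func main() {
	history := []msg{{Role: "user", Content: "What's the weather in Beijing?"}}
	history = reactTurn(history)
	for _, m := range history {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```

The key property is the alternation of roles: each tool observation arrives as its own message, so the model can reason over it in the next step.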

In Blades, a message for any role can carry one or more content parts of the following types:

  • TextPart: Plain text content
  • FilePart: A reference to an external file via URI
  • DataPart: Embedded file bytes
  • ToolPart: A tool call request and its result

You can use the constructor corresponding to each role to add one or more parts. Here, blades.UserMessage is used as an example:

input := blades.UserMessage(
	blades.TextPart{
		Text: "Can you describe the image in logo.svg?",
	},
	blades.FilePart{
		MIMEType: "image/png",
		URI:      "https://go-kratos.dev/images/architecture.png",
	},
)

Plain text content can be added using blades.TextPart, which carries pure text such as user questions or AI answers.

blades.TextPart{
	Text: "Can you describe the image in logo.svg?",
}

FilePart references an existing file (without embedding its content) via a URI (e.g., a local path, HTTP URL, or S3 link) pointing to the file.

blades.FilePart{
	MIMEType: "image/png",
	URI:      "https://go-kratos.dev/images/architecture.png",
}

When users upload large files and the system only needs to save the path after upload, choosing this method can greatly reduce costs. In distributed systems, files are stored in object storage (e.g., MinIO, S3), and the Agent only needs to access the corresponding link.

Pros: FilePart has a small message body and is efficient to transmit; it supports large files (avoiding memory overflow) and can reuse existing file resources.

Cons: The recipient must be able to access the URI (permissions, network accessibility); files may be deleted or moved, causing failure.
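A cheap guard against the "recipient cannot access the URI" failure mode is to validate the URI before attaching it. Below is a minimal sketch using the standard library's net/url; the allowed-scheme list is an assumption to adapt to your deployment.

```go
package main

import (
	"fmt"
	"net/url"
)

// validateFileURI rejects URIs the recipient is unlikely to be able to fetch.
// The allowed-scheme list here is illustrative; adjust it to your deployment.
func validateFileURI(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return fmt.Errorf("invalid URI: %w", err)
	}
	switch u.Scheme {
	case "http", "https", "s3":
		return nil
	default:
		return fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
}

func main() {
	fmt.Println(validateFileURI("https://go-kratos.dev/images/architecture.png")) // <nil>
	fmt.Println(validateFileURI("ftp://example.com/file.png"))                    // unsupported scheme "ftp"
}
```

Note that this only checks the URI's shape, not permissions or reachability; those can still fail at fetch time.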

DataPart embeds file bytes (the full content), placing the complete binary content of a file directly into the message.

blades.DataPart{
	Name:     "cat",
	MIMEType: blades.MIMEImagePNG,
	Bytes:    imagesBytes,
}

Suitable for small files (e.g., icons, short audio, screenshots). Of course, if data integrity must be ensured without relying on external storage, this type can also be chosen. Users can use this type to quickly transfer data during testing or local development.

The advantage of DataPart is that it requires no external dependencies, ensures data consistency, and facilitates data serialization and deserialization (e.g., JSON + Base64).

However, this type also has drawbacks: DataPart requires uploading byte streams, resulting in a large message size, which can impact performance. Uploading large files may cause Out of Memory or network request timeout errors, potentially reducing system stability. Additionally, excessive data transmission consumes more bandwidth.

ToolPart records tool call requests and responses, documenting the complete lifecycle of a tool call (call parameters + execution result). A ToolPart is constructed as follows:

blades.ToolPart{
	ID:       "load",
	Name:     "load",
	Request:  `{"city": "Beijing"}`,
	Response: `{"temp": 18, "condition": "sunny"}`,
}

Preserves tool interaction context in conversation history; additionally, ToolPart supports multi-step reasoning (e.g., Agent first calls tool A, then calls tool B based on the result).
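The request → execute → response lifecycle can be sketched as follows. This is self-contained: the toolPart struct, execute helper, and lookupWeather tool are hypothetical illustrations, not Blades definitions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolPart mirrors the call-plus-result shape shown above (illustrative only).
type toolPart struct {
	ID, Name, Request, Response string
}

// lookupWeather is a stand-in for a real external tool or API.
func lookupWeather(city string) string {
	return `{"temp": 18, "condition": "sunny"}`
}

// execute parses the request, runs the tool, and records the full lifecycle
// in one part, so both the parameters and the result stay in the history.
func execute(id, name, request string) (toolPart, error) {
	var args struct {
		City string `json:"city"`
	}
	if err := json.Unmarshal([]byte(request), &args); err != nil {
		return toolPart{}, err
	}
	return toolPart{ID: id, Name: name, Request: request, Response: lookupWeather(args.City)}, nil
}

func main() {
	part, err := execute("load", "load", `{"city": "Beijing"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(part.Response)
}
```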

Prompts are the bridge between users and large language models. We do not discuss how to write a good prompt here; instead, we focus on the various ways Blades constructs prompts for different scenarios.

input := blades.UserMessage("What is the capital of France?")

blades.UserMessage directly returns a *Message instance.

// buildPrompt renders a prompt template with the given parameters
// (uses the standard library's text/template and strings packages).
func buildPrompt(params map[string]any) (string, error) {
	var (
		tmpl = "Respond concisely and accurately for a {{.audience}} audience."
		buf  strings.Builder
	)
	t, err := template.New("message").Parse(tmpl)
	if err != nil {
		return "", err
	}
	if err := t.Execute(&buf, params); err != nil {
		return "", err
	}
	return buf.String(), nil
}

You can build a template function and pass in the corresponding parameters.

prompt, err := buildPrompt(params)
if err != nil {
	log.Fatal(err)
}
input := blades.UserMessage(prompt)
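When a template parameter might be missing, text/template's missingkey option (a standard-library feature) turns the silent "&lt;no value&gt;" substitution into an error. The buildStrictPrompt name below is a hypothetical variant of buildPrompt, sketched to show the option:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// buildStrictPrompt fails fast when a referenced parameter is absent,
// instead of rendering "<no value>" into the prompt.
func buildStrictPrompt(tmpl string, params map[string]any) (string, error) {
	t, err := template.New("message").Option("missingkey=error").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf strings.Builder
	if err := t.Execute(&buf, params); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	tmpl := "Respond concisely and accurately for a {{.audience}} audience."
	out, err := buildStrictPrompt(tmpl, map[string]any{"audience": "beginner"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // Respond concisely and accurately for a beginner audience.
	if _, err := buildStrictPrompt(tmpl, map[string]any{}); err != nil {
		fmt.Println("missing parameter detected:", err)
	}
}
```

Failing at construction time keeps a half-rendered prompt from ever reaching the model.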
The following complete example sends an embedded image to an agent and prints the model's description:

package main

import (
	"context"
	"log"
	"os"

	"github.com/go-kratos/blades"
	"github.com/go-kratos/blades/contrib/openai"
)

func main() {
	model := openai.NewModel(os.Getenv("OPENAI_MODEL"), openai.Config{
		APIKey: os.Getenv("OPENAI_API_KEY"),
	})
	agent, err := blades.NewAgent(
		"Basic Agent",
		blades.WithModel(model),
		blades.WithInstruction("You can tell user what's in the pictures"),
	)
	if err != nil {
		log.Fatal(err)
	}
	imagesBytes, err := os.ReadFile("img.png")
	if err != nil {
		log.Fatal(err)
	}
	fileLoad := blades.DataPart{
		Name:     "cat",
		MIMEType: blades.MIMEImagePNG,
		Bytes:    imagesBytes,
	}
	input := blades.UserMessage(fileLoad)
	runner := blades.NewRunner(agent)
	output, err := runner.Run(context.Background(), input)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(output.Text())
}
  • Clear System Instructions: Provide clear, specific instructions in system messages to help the model better understand task requirements.
  • Use Templates Appropriately: Leverage templating functionality to improve code reusability and maintainability, especially in scenarios requiring dynamic Prompt generation.
  • Manage Context Length: Pay attention to controlling Prompt length to avoid exceeding the model’s maximum context limit.
  • Error Handling: Always check for errors during template rendering and Prompt construction to ensure application robustness.
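The context-length point can be enforced with something as simple as keeping the system message plus the most recent turns. This is a self-contained sketch; a real budget should count tokens with the model's tokenizer, which is not shown here.

```go
package main

import "fmt"

// trimHistory keeps the first message (typically the system instruction)
// plus the most recent max-1 messages, dropping the middle of long histories.
func trimHistory(history []string, max int) []string {
	if len(history) <= max || max < 2 {
		return history
	}
	trimmed := []string{history[0]}
	return append(trimmed, history[len(history)-(max-1):]...)
}

func main() {
	history := []string{"system", "u1", "a1", "u2", "a2", "u3"}
	fmt.Println(trimHistory(history, 4)) // [system u2 a2 u3]
}
```

Dropping whole messages from the middle is crude but safe; more careful schemes summarize the dropped span instead of discarding it.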