Practical Guide: Building a Human-in-the-loop AI Workflow

• Linu Team • 5 min read
Tutorial • AI Agent • Webhook • Practical

With the rise of LLMs (Large Language Models), we are used to letting AI write our weekly reports, summarize articles, and generate code. In fully automated pipelines, however, AI may produce hallucinations or inaccurate content.

Human-in-the-loop (HITL) is a pattern that introduces human feedback into an otherwise automated loop. As a push terminal that supports two-way interaction, Linu is well suited to act as a "remote control" for AI Agents.

This article will demonstrate how to build an AI Daily Briefing Assistant:

  1. Auto Generation: AI automatically crawls tech news and writes summaries every morning.
  2. Human Review: The draft is pushed to your phone, where you can choose "Publish", "Rewrite", or reply directly with modification suggestions.
  3. Closed-loop Execution: Based on your feedback, AI will execute the publishing task or regenerate content.

Scenario Flow

  1. Trigger: A scheduled task triggers a Python script to call the OpenAI API to generate a briefing.
  2. Push: Call the Linu API to send a briefing preview.
  3. Interact:
    • Click [🚀 Publish] -> Trigger Action Webhook -> Publish to blog/send email.
    • Click [🔄 Rewrite] -> Trigger Action Webhook -> AI regenerates in a different style.
    • Reply Message -> "Too long, make it concise" -> Trigger Reply Webhook -> AI regenerates based on feedback.

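The trigger-and-push half of this flow can be sketched as a small script run from cron each morning. This is a minimal sketch, assuming stdlib-only HTTP: the LLM call is mocked (swap in a real OpenAI/Claude request), and the endpoint, token, and device token placeholders mirror the request shown in Step 1 below.

```python
import json
import urllib.request

LINU_API = "https://api.linu.app/v1/push"
TOKEN = "YOUR_TOKEN"  # placeholder, as in Step 1

def generate_briefing(headlines):
    """Mocked LLM call; replace with a real OpenAI/Claude request."""
    body = "\n".join(f"{i}. {h}" for i, h in enumerate(headlines, 1))
    return f"Today's Headlines\n\n{body}"

def build_push(text):
    """Assemble the preview message; fields mirror Step 1's request."""
    return {
        "ios": ["YOUR_DEVICE_TOKEN"],
        "message": {
            "title": "🌞 Daily AI Briefing (Pending Review)",
            "text": text,
            "group_id": "ai-daily-news",
        },
    }

def send_push(payload):
    """POST the payload to the Linu push endpoint."""
    req = urllib.request.Request(
        LINU_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling `send_push(build_push(generate_briefing(headlines)))` from a scheduled job covers steps 1 and 2 of the flow; the rest of this article adds the interactive part.
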
Step 1: Send Interactive Preview

We need to send a message containing an actions field. To support direct replies with modification suggestions, we also need to configure group_config.reply_callback.

Request Example

curl -X POST https://api.linu.app/v1/push \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ios": ["YOUR_DEVICE_TOKEN"],
    "message": {
        "title": "🌞 Daily AI Briefing (Pending Review)",
        "text": "Today's Headlines\n\n1. OpenAI releases GPT-5: Reasoning capability up 200%...\n2. Apple Vision Pro 2: Lighter and cheaper...\n\n(Click buttons below to publish, or reply directly with changes)",
        "group_id": "ai-daily-news",
        "actions": [
            {
                "label": "🚀 Publish",
                "callback": "https://your-agent.com/hooks/action",
                "method": "POST",
                "payload": "publish_draft_20260209"
            },
            {
                "label": "🔄 Rewrite",
                "callback": "https://your-agent.com/hooks/action",
                "method": "POST",
                "payload": "rewrite_draft_20260209"
            }
        ],
        "group_config": {
            "name": "AI Editor",
            "reply_callback": {
                "url": "https://your-agent.com/hooks/reply",
                "method": "POST"
            }
        }
    },
    "options": {
        "priority": "normal"
    }
  }'

Key Fields Explanation

  • actions: Defines shortcut action buttons. The payload field is pass-through data; when a button is clicked, it is sent back to your server as is.
  • group_config.reply_callback: Defines the address Linu Server should call when a user replies directly to this message in the App. This is key to implementing "conversational modification".
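
Because the payload string comes back verbatim, a naming convention such as `<verb>_<draft_id>` (assumed here, based on the example payloads above) lets the server route the action and recover the draft id in one step:

```python
def parse_payload(payload):
    """Split a pass-through payload like 'publish_draft_20260209' into
    (verb, draft_id). The '<verb>_<draft_id>' convention is an assumption
    drawn from the example payloads, not a Linu requirement."""
    verb, _, draft_id = payload.partition("_")
    return verb, draft_id
```

For example, `parse_payload("publish_draft_20260209")` returns `("publish", "draft_20260209")`, so the same handler can serve any number of drafts.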

Step 2: Build Agent Server

You need a service to act as the "brain," processing user feedback and controlling the AI.

Python (Flask) Example

from flask import Flask, request, jsonify
import threading

app = Flask(__name__)

# Mock LLM call
def ask_llm(prompt):
    print(f"🤖 AI is thinking: {prompt}...")
    # Call the OpenAI/Claude API here
    return f"This is the regenerated content based on your feedback '{prompt}'..."

# Mock push (calls the Linu API)
def push_to_linu(text):
    print(f"📱 Pushing message to Linu: {text[:20]}...")
    # Use requests.post to call the Linu API in real code

@app.route('/hooks/action', methods=['POST'])
def handle_action():
    """Handle button clicks"""
    data = request.json
    # Linu Webhook structure: {"type": "action", "payload": "...", "device_token": "..."}
    
    payload = data.get('payload', '')
    device_token = data.get('device_token', '')

    print(f"Received action: {payload} from ...{device_token[-6:]}")

    if payload.startswith('publish_'):
        # Execute the publishing logic
        draft_id = payload.split('_', 1)[1]  # e.g. "draft_20260209"
        push_to_linu(f"✅ Briefing {draft_id} successfully published to the blog!")
        return jsonify({"message": "Publishing..."})

    elif payload.startswith('rewrite_'):
        # Trigger the rewrite asynchronously so the webhook returns quickly
        def rewrite_task():
            new_content = ask_llm("Please rewrite in a humorous style")
            push_to_linu(f"🔄 Rewrite complete:\n\n{new_content}")

        threading.Thread(target=rewrite_task, daemon=True).start()
        return jsonify({"message": "AI is rewriting..."})

    return jsonify({"error": "Unknown action"}), 400

@app.route('/hooks/reply', methods=['POST'])
def handle_reply():
    """Handle user text replies"""
    data = request.json
    # Linu Webhook structure: {"type": "reply", "text": "...", "group_id": "...", "device_token": "..."}
    
    user_text = data.get('text', '')  # Content of the user's reply
    
    print(f"Received feedback: {user_text}")

    # Async handling: feed the user's reply back to the AI as a prompt
    def refine_task():
        # 1. Confirmation command received
        if user_text.strip().lower() in ['ok', 'yes', 'confirm']:
            push_to_linu("✅ Confirmed, publishing...")
            return

        # 2. Modification command received
        new_content = ask_llm(f"User feedback: {user_text}. Please modify the briefing accordingly.")

        # 3. Push the modified version; attach buttons for the next round of confirmation
        push_to_linu(f"📝 Modified based on your feedback:\n\n{new_content}")

    threading.Thread(target=refine_task, daemon=True).start()

    return jsonify({"message": "AI received feedback"})

if __name__ == '__main__':
    app.run(port=5000)
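
Before wiring the server up to Linu, you can exercise both endpoints locally by replaying the webhook bodies Linu would deliver. The two shapes below are taken from the structure comments in the handlers above; the token and text values are test placeholders.

```python
import json

def action_event(payload, device_token="TEST_DEVICE_TOKEN"):
    """Simulated body of an Action Webhook, per the handler's comment."""
    return {"type": "action", "payload": payload, "device_token": device_token}

def reply_event(text, group_id="ai-daily-news", device_token="TEST_DEVICE_TOKEN"):
    """Simulated body of a Reply Webhook, per the handler's comment."""
    return {"type": "reply", "text": text, "group_id": group_id,
            "device_token": device_token}

if __name__ == "__main__":
    # Print a sample event; send it to the local server with e.g.
    #   curl -X POST http://127.0.0.1:5000/hooks/reply \
    #        -H "Content-Type: application/json" \
    #        -d '{"type": "reply", "text": "Too long, make it concise", ...}'
    print(json.dumps(reply_event("Too long, make it concise")))
```

Posting `action_event("rewrite_draft_20260209")` to /hooks/action should print the rewrite log lines, and a reply event should trigger `refine_task`, letting you verify the whole loop without a device in hand.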

Step 3: Experience the Closed-loop Workflow

  1. Morning: Your server script runs, and AI generates today's briefing.
  2. Notification: Your phone receives a push. You notice the description of the second news item is inaccurate.
  3. Feedback: You reply directly in the message box: "Highlight the price in the second news item."
  4. Refinement: Your server receives the Webhook and calls the LLM to modify the content.
  5. Update: A few seconds later, you receive a new push: "...Apple Vision Pro 2 released, priced at only $1500...".
  6. Approve: Satisfied this time, you click the [🚀 Publish] button below the message.
  7. Done: AI publishes the article to your CMS system and pushes a "Publish Successful" confirmation message.

This is Human-in-the-loop. AI handles the heavy lifting, while you hold the final decision-making power and fine-tuning capability via Linu.

More AI Scenario Inspirations

  • Code Review Agent:
    • GitHub code commit -> AI analyzes potential bugs -> Pushes a report.
    • Actions: [Ignore] [Create Issue] [Auto Fix].
  • Home Security Agent:
    • Camera detects stranger -> AI identifies and describes appearance -> Pushes screenshot and description.
    • Actions: [Alarm] [Open Door] [Speak].
    • Reply: "Ask who he is" -> Doorbell speaker automatically plays TTS voice.

Through Linu's flexible Webhook and Payload mechanisms, you can connect any AI Agent to your palm for collaboration anytime, anywhere.