
Building AI-Powered Terminal Tools with Rust

8 min read · Shakirul Hasan Khan
rust · ai · cli · openai

As developers, we spend countless hours in the terminal, often struggling to remember complex command syntax or discovering the right flags for specific tasks. This frustration led me to create Coterm - a terminal copilot that bridges the gap between natural language and command-line interfaces.


The Problem

Have you ever found yourself:

  • Googling "how to find files modified in the last 24 hours"
  • Struggling to remember tar extraction flags
  • Wishing you could just describe what you want instead of memorizing syntax

I faced these challenges daily, especially when working across different systems and tools. The solution seemed obvious: what if we could describe our intent in plain English and get the exact command we need?

Enter Coterm

Coterm is a Rust-based CLI tool that acts as your terminal copilot. Simply describe what you want to do, and it generates the appropriate command using OpenAI's GPT models.

# Instead of remembering complex syntax
coterm "find all javascript files modified in the last week"
# Returns: find . -name "*.js" -mtime -7

coterm "compress this directory excluding node_modules"
# Returns: tar -czf archive.tar.gz --exclude=node_modules .

Technical Architecture

Why Rust?

I chose Rust for several compelling reasons:

  1. Performance: Terminal tools need to be fast and responsive
  2. Memory Safety: Compile-time safety guarantees without a garbage collector
  3. Cross-platform: Single binary that runs everywhere
  4. Ecosystem: Excellent crates for HTTP, JSON, and CLI interfaces

Core Components

The architecture consists of three main components:

// Simplified structure
pub struct Coterm {
    client: OpenAIClient,
    config: Config,
    history: CommandHistory,
}

impl Coterm {
    pub async fn generate_command(&self, prompt: &str) -> Result<String> {
        // 1. Process the natural language input
        // 2. Query the OpenAI API
        // 3. Return the formatted command
        todo!()
    }
}

API Integration

Rather than hosting our own models, I integrated with OpenAI's API through Vercel Edge Functions. This approach provides:

  • Scalability: No infrastructure management
  • Latest Models: Access to GPT-4 and newer models
  • Cost Efficiency: Pay-per-use pricing

// Vercel Edge Function
import { NextRequest } from "next/server";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export default async function handler(req: NextRequest) {
  const { prompt, context } = await req.json();

  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "You are a command-line expert. Generate precise, safe commands based on user descriptions.",
      },
      {
        role: "user",
        content: prompt,
      },
    ],
    max_tokens: 150,
    temperature: 0.1,
  });

  return new Response(
    JSON.stringify({
      command: completion.choices[0].message.content,
    })
  );
}

Key Features

1. Natural Language Processing

Coterm understands context and intent:

coterm "show me disk usage of each directory, sorted by size"
# Returns: du -sh */ | sort -hr

coterm "kill all node processes"
# Returns: pkill -f node

2. Safety Measures

The system includes several safety features:

  • Command Review: Always shows the command before execution
  • Dangerous Command Detection: Warns about potentially destructive operations
  • Confirmation Prompts: Requires explicit approval for risky commands
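The dangerous-command check can be as simple as matching against a list of known destructive patterns. A minimal sketch of the idea (the function name and pattern list are illustrative, not Coterm's actual implementation):

```rust
/// Returns true if the command matches a known destructive pattern.
/// The list here is deliberately short; a real one would be longer.
fn is_dangerous(command: &str) -> bool {
    const DANGEROUS_PATTERNS: &[&str] = &[
        "rm -rf /",      // recursive delete from root
        "mkfs",          // reformatting a filesystem
        "dd if=",        // raw disk writes
        ":(){ :|:& };:", // fork bomb
        "> /dev/sd",     // clobbering a block device
    ];
    DANGEROUS_PATTERNS.iter().any(|p| command.contains(p))
}
```

A substring check like this errs on the side of caution: false positives just mean an extra confirmation prompt, which is a cheap price for trust.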

3. Command Revision

If the generated command isn't quite right, you can ask for modifications:

coterm "find large files"
# Returns: find . -size +100M

# Then refine:
coterm "make that only for the current directory, not subdirectories"
# Returns: find . -maxdepth 1 -size +100M
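Revision falls out naturally if each session keeps the prior exchange and sends it back along with the refinement. A sketch of that idea, assuming a chat-style message history (the type and method names are hypothetical):

```rust
#[derive(Clone, Debug)]
struct Message {
    role: &'static str, // "system", "user", or "assistant"
    content: String,
}

struct Session {
    messages: Vec<Message>,
}

impl Session {
    fn new(system_prompt: &str) -> Self {
        Session {
            messages: vec![Message {
                role: "system",
                content: system_prompt.to_string(),
            }],
        }
    }

    /// Record one request/response pair so later refinements have context.
    fn record(&mut self, user: &str, assistant: &str) {
        self.messages.push(Message { role: "user", content: user.to_string() });
        self.messages.push(Message { role: "assistant", content: assistant.to_string() });
    }

    /// Build the message list for a follow-up like
    /// "make that only for the current directory".
    fn request_for(&self, refinement: &str) -> Vec<Message> {
        let mut msgs = self.messages.clone();
        msgs.push(Message { role: "user", content: refinement.to_string() });
        msgs
    }
}
```

Because the earlier "find large files" exchange is still in the message list, the model sees what "that" refers to.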

4. One-Key Execution

The tool integrates with your shell for seamless execution:

# Generated command appears with option to execute
coterm "compress logs older than 30 days"
# Shows: find ./logs -name "*.log" -mtime +30 -exec gzip {} \;
# Press Enter to execute, Ctrl+C to cancel
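Under the hood, "press Enter to execute" reduces to reading one line from stdin and spawning the shell. A rough sketch using only the standard library (the prompting details are my own, not Coterm's exact UX):

```rust
use std::io::{self, BufRead, Write};
use std::process::Command;

/// An empty line or "y"/"yes" counts as confirmation; anything else cancels.
fn confirmed(input: &str) -> bool {
    matches!(input.trim().to_lowercase().as_str(), "" | "y" | "yes")
}

fn offer_to_run(command: &str) -> io::Result<()> {
    print!("{}\nPress Enter to execute, anything else to cancel: ", command);
    io::stdout().flush()?;

    let mut line = String::new();
    io::stdin().lock().read_line(&mut line)?;

    if confirmed(&line) {
        // Run through the user's shell so pipes and globs behave as expected.
        Command::new("sh").arg("-c").arg(command).status()?;
    }
    Ok(())
}
```

Delegating to `sh -c` keeps shell features like pipes and `-exec` working, at the cost of inheriting the shell's quoting rules.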

Implementation Challenges

1. Context Awareness

One challenge was making the AI understand the current directory context and available tools. I solved this by:

  • Detecting the current OS and shell
  • Including relevant environment information in prompts
  • Providing context about installed tools
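In practice this means prefixing each request with a short, machine-gathered context block. A sketch of the idea (the field names and wording are mine):

```rust
/// Assemble environment details to include in the system prompt so the
/// model generates commands appropriate for this machine.
fn build_context(os: &str, shell: &str, cwd: &str) -> String {
    format!(
        "OS: {os}\nShell: {shell}\nWorking directory: {cwd}\n\
         Prefer commands available on this platform."
    )
}

fn gather_context() -> String {
    // std::env::consts::OS is e.g. "linux" or "macos"; the shell and
    // current directory come from the environment at runtime.
    let shell = std::env::var("SHELL").unwrap_or_else(|_| "sh".to_string());
    let cwd = std::env::current_dir()
        .map(|p| p.display().to_string())
        .unwrap_or_default();
    build_context(std::env::consts::OS, &shell, &cwd)
}
```

A few lines of context like this is often the difference between getting `pbcopy` on macOS and `xclip` on Linux.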

2. Error Handling

Network requests can fail, and API responses might be malformed. Robust error handling ensures a smooth user experience:

pub async fn generate_command(&self, prompt: &str) -> Result<String, CoTermError> {
    let response = self.client
        .post(&self.config.api_endpoint)
        .json(&serde_json::json!({ "prompt": prompt }))
        .timeout(Duration::from_secs(10))
        .send()
        .await
        .map_err(|e| CoTermError::NetworkError(e.to_string()))?;

    if !response.status().is_success() {
        return Err(CoTermError::ApiError(response.status()));
    }

    // Parse and validate response
    let command = self.parse_response(response).await?;
    self.validate_command(&command)?;

    Ok(command)
}

3. Performance Optimization

To minimize latency:

  • Connection Pooling: Reuse HTTP connections
  • Caching: Store common command patterns locally
  • Async Processing: Non-blocking API calls
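The caching point can be sketched as a small map keyed by a normalized prompt, so repeated phrasings of the same request skip the network round trip entirely (the normalization here is deliberately naive):

```rust
use std::collections::HashMap;

struct CommandCache {
    entries: HashMap<String, String>,
}

impl CommandCache {
    fn new() -> Self {
        CommandCache { entries: HashMap::new() }
    }

    /// Naive normalization: case-fold and collapse whitespace so
    /// "Find  Large Files" and "find large files" share one entry.
    fn key(prompt: &str) -> String {
        prompt
            .split_whitespace()
            .collect::<Vec<_>>()
            .join(" ")
            .to_lowercase()
    }

    fn get(&self, prompt: &str) -> Option<&str> {
        self.entries.get(&Self::key(prompt)).map(String::as_str)
    }

    fn insert(&mut self, prompt: &str, command: String) {
        self.entries.insert(Self::key(prompt), command);
    }
}
```

A persistent version would serialize this map to disk, but even an in-memory cache removes latency for repeated prompts within a session.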

Results and Impact

Since launching Coterm:

  • 12+ GitHub stars and growing community
  • Positive feedback from developers across different skill levels
  • Time savings of 10-15 minutes per day for regular users
  • Learning tool for junior developers to understand command syntax

Lessons Learned

1. User Experience is Everything

Initially, I focused too much on technical features. User feedback taught me that:

  • Simple, intuitive commands matter more than complex features
  • Clear error messages are crucial
  • Safety features build trust

2. AI Integration Best Practices

Working with AI APIs taught me:

  • Prompt engineering is critical - small changes dramatically affect output quality
  • Temperature settings matter - lower values for consistent, predictable commands
  • Fallback strategies are essential when APIs are unavailable
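The fallback point is worth making concrete: when the API call fails, a cached answer for the same prompt beats an error message. A minimal sketch (the closure stands in for the real API client):

```rust
use std::collections::HashMap;

/// Try the API first; on failure, fall back to a local cache of
/// previously generated commands.
fn generate_with_fallback(
    call_api: impl Fn(&str) -> Result<String, String>,
    cache: &HashMap<String, String>,
    prompt: &str,
) -> Result<String, String> {
    match call_api(prompt) {
        Ok(command) => Ok(command),
        Err(_) => cache
            .get(prompt)
            .cloned()
            .ok_or_else(|| "API unavailable and no cached command".to_string()),
    }
}
```

Taking the API call as a closure also makes this logic trivially testable without touching the network.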

3. Community-Driven Development

Open-sourcing Coterm led to:

  • Bug reports that improved reliability
  • Feature requests that shaped the roadmap
  • Contributions that accelerated development

Future Roadmap

I'm actively working on several enhancements:

1. Local Model Support

Integrating with local LLMs for:

  • Privacy: Sensitive commands stay on your machine
  • Offline Usage: No internet dependency
  • Customization: Fine-tuned models for specific workflows

2. Shell Integration

Deeper integration with popular shells:

  • Zsh/Bash plugins for seamless experience
  • Tab completion for Coterm commands
  • History integration with existing shell history

3. Learning System

A feedback mechanism to improve suggestions:

  • Usage tracking to understand common patterns
  • Correction learning when users modify generated commands
  • Personal adaptation to individual workflow preferences

Getting Started

Ready to try Coterm? Here's how to get started:

# Install using cargo
cargo install coterm

# Or download from GitHub releases
curl -L https://github.com/KhanShaheb34/coterm/releases/latest/download/coterm-linux-x64 -o coterm
chmod +x coterm

# Configure API key
coterm config set api-key YOUR_OPENAI_KEY

# Start using natural language commands
coterm "show git commits from last week"

Conclusion

Building Coterm has been an incredible journey combining my passion for Rust, AI, and developer tools. It demonstrates how AI can enhance rather than replace human expertise, making complex tools more accessible while teaching users along the way.

The intersection of natural language processing and system administration opens exciting possibilities. As AI models become more capable and efficient, tools like Coterm will become indispensable parts of the developer toolkit.


Have questions about Coterm or want to contribute? Check out the GitHub repository or reach out on Twitter.