Building carflipper.ai: from 15 years of car flipping to a production AI agent

In 2024, I productised 15 years of car flipping expertise into carflipper.ai, an AI-powered agent that scans thousands of listings across every major marketplace, flags profitable deals in real time, and delivers automatic notifications. Users report average savings of $3,200 per flip and a 98% satisfaction rate, with the agent monitoring marketplaces around the clock.
This is the story of how domain expertise, modern AI, and pragmatic engineering converged to solve a real problem for dealers and flippers worldwide.
The origin story
Son of a 40-year veteran mechanic and brother to a lifelong panel-beater, I grew up surrounded by engines and paint booths. For the past 15 years, I've personally bought, restored, and resold cars as a private hobby. What started as a side interest became a deep understanding of market dynamics, pricing patterns, and the art of spotting undervalued opportunities.
The problem was clear: manually scanning marketplaces, tracking prices, and identifying profitable deals required hours of daily work. Dealers and flippers needed a way to automate this process without losing the nuanced judgment that comes from experience.
In 2024, I decided to productise that domain expertise into a production-grade AI agent that could replicate—and scale—the pattern recognition I'd developed over years of hands-on experience.
What carflipper.ai does
carflipper.ai is a Telegram-based AI agent that automates the entire deal-finding process:
- Scans thousands of listings across every major marketplace in real time
- Flags listings that represent profitable opportunities using AI-powered matching algorithms
- Delivers automatic deal notifications based on user-defined frequency and criteria
- Provides real-time validated market data to help with decision-making
- Operates 24/7 with continuous monitoring and instant alerts
The platform has delivered measurable results: users report $3,200 average savings per flip with a 98% satisfaction rate.
The product: feature set
AI-powered matching engine
At the core of carflipper.ai is a sophisticated matching engine that combines multiple LLMs (OpenAI, Anthropic, and Gemini) running in parallel. This multi-model approach ensures robust pricing analysis, data validation, and filtering—critical when dealing with thousands of listings across diverse marketplaces.
The engine evaluates the following signals (a simplified scoring sketch appears after the list):
- Price discrepancies against market averages
- Condition indicators and listing quality
- Geographic and market segmentation
- Historical pricing trends
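To make those criteria concrete, here is a minimal scoring sketch in TypeScript. The field names, weights, and threshold are illustrative assumptions rather than the production logic:

```typescript
// Illustrative only: field names, weights, and thresholds are assumptions.
interface Listing {
  id: string;
  price: number;          // asking price
  region: string;
  conditionScore: number; // 0..1, derived upstream from listing text and photos
}

interface MarketStats {
  medianPrice: number;    // median of comparable recent listings
  sampleSize: number;
}

/** Score a listing; higher means a more promising flip candidate. */
function scoreListing(listing: Listing, market: MarketStats): number {
  if (market.sampleSize < 5) return 0; // too few comparables to trust the median

  // Price discrepancy: how far below the market median the asking price sits.
  const discount = (market.medianPrice - listing.price) / market.medianPrice;

  // Weight price discrepancy most heavily, then condition signals.
  return 0.7 * Math.max(discount, 0) + 0.3 * listing.conditionScore;
}

// A listing only becomes an alert candidate above a configurable threshold.
const ALERT_THRESHOLD = 0.2;
const isCandidate = (l: Listing, m: MarketStats) => scoreListing(l, m) >= ALERT_THRESHOLD;
```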
Instant alerts and notifications
Users receive real-time notifications via Telegram, eliminating the need for app installations or constant marketplace monitoring. The bot interface provides a frictionless experience—dealers can set criteria, receive alerts, and act immediately.
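For a rough idea of the delivery path, the Telegram Bot API exposes a sendMessage method that only needs a bot token, a chat ID, and the message text. The wrapper below is a simplified sketch; the token, chat ID, and message content are placeholders:

```typescript
// Minimal alert sender using the Telegram Bot API's sendMessage method.
// BOT_TOKEN and chatId are placeholders; error handling is simplified.
async function sendDealAlert(chatId: number, text: string): Promise<void> {
  const url = `https://api.telegram.org/bot${process.env.BOT_TOKEN}/sendMessage`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text, parse_mode: "Markdown" }),
  });
  if (!res.ok) {
    throw new Error(`Telegram API returned ${res.status}`);
  }
}

// Example: alert a user about a flagged listing.
await sendDealAlert(123456789, "*Deal found:* 2018 Golf GTI, 18% below market median");
```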
Geographic expansion and segmentation
The platform supports geographic expansion and segmentation, allowing users to target specific markets, regions, or dealer networks. This segmentation enables more precise deal matching and reduces noise from irrelevant listings.
Real-time validated market data
Every listing is validated against real-time market data, ensuring that flagged opportunities are based on current market conditions, not outdated information. This validation layer reduces false positives and increases user confidence in the alerts they receive.
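One simplified way to picture this validation layer: a flagged deal only passes if the comparable data behind its price estimate is both recent and large enough. The field names, freshness window, and minimum sample size below are assumptions:

```typescript
// Illustrative freshness check: comparables must be recent and plentiful
// before a flagged deal is allowed through. Names and limits are assumptions.
interface Comparable {
  price: number;
  scrapedAt: Date;
}

const MAX_AGE_MS = 1000 * 60 * 60 * 24 * 7; // 7-day window
const MIN_COMPARABLES = 10;

function validateMarketData(comps: Comparable[], now = new Date()): boolean {
  const fresh = comps.filter((c) => now.getTime() - c.scrapedAt.getTime() <= MAX_AGE_MS);
  return fresh.length >= MIN_COMPARABLES;
}
```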
The production stack
Building a production-grade AI agent that processes thousands of listings in real time requires careful architecture decisions. Here's the stack I built and operate:
Frontend: Next.js App Router + Tailwind
The web interface is built with Next.js 14 App Router and Tailwind CSS, providing a fast, responsive experience for onboarding, configuration, and account management.
Orchestration: Redis BullMQ
Job orchestration is handled by Redis BullMQ, managing the complex pipeline of marketplace scanning, AI processing, and notification delivery. BullMQ provides reliable job processing, retry logic, and rate limiting—essential for handling marketplace APIs that have strict rate limits.
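A minimal sketch of that kind of setup, with a scan queue using exponential-backoff retries and a rate-limited worker (the queue name, limits, and job payload are assumptions, not the actual configuration):

```typescript
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 }; // Redis connection (placeholder)

// Queue for marketplace scan jobs; retries use exponential backoff.
const scanQueue = new Queue("marketplace-scan", { connection });

await scanQueue.add(
  "scan-listings",
  { marketplace: "example-marketplace", region: "EU" }, // hypothetical payload
  { attempts: 5, backoff: { type: "exponential", delay: 5_000 } },
);

// Worker rate-limited to stay under marketplace quotas.
const worker = new Worker(
  "marketplace-scan",
  async (job) => {
    // crawl the marketplace, then enqueue listings for LLM analysis, etc.
    console.log(`scanning ${job.data.marketplace} / ${job.data.region}`);
  },
  { connection, limiter: { max: 10, duration: 60_000 } }, // at most 10 jobs per minute
);

worker.on("failed", (job, err) => console.error(`job ${job?.id} failed: ${err.message}`));
```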
Persistence: MongoDB
MongoDB stores user configurations, historical listings, market data, and alert logs. The flexible schema allows for rapid iteration on data models as the platform evolves.
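As an illustration of how a listing snapshot might be persisted with the official Node.js driver (the database name, collection name, and document shape are assumptions):

```typescript
import { MongoClient } from "mongodb";

// Connection string is a placeholder; in production it comes from config.
const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017");
await client.connect();

const db = client.db("carflipper"); // hypothetical database name

// Hypothetical shape for a stored listing snapshot.
await db.collection("listings").insertOne({
  marketplace: "example-marketplace",
  externalId: "abc-123",
  price: 7400,
  currency: "EUR",
  scrapedAt: new Date(),
  score: 0.31,  // output of the matching engine
  alerted: false,
});

// Index to support "newest listings per marketplace" queries.
await db.collection("listings").createIndex({ marketplace: 1, scrapedAt: -1 });
```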
LLMs: Parallel OpenAI + Anthropic + Gemini
The AI layer uses parallel processing across three LLM providers:
- OpenAI for pricing analysis and market evaluation
- Anthropic for data validation and filtering
- Gemini for additional analysis and redundancy
This multi-provider approach improves reliability, reduces latency through parallel processing, and provides fallback options if one provider experiences issues.
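The fan-out pattern looks roughly like this: call every provider concurrently and keep whatever succeeds. The provider wrappers below are stubs standing in for the real SDK calls, not actual code from the platform:

```typescript
type Verdict = { provider: string; profitable: boolean; confidence: number };

// Stubs standing in for the real SDK calls (OpenAI, Anthropic, Gemini clients).
const providers: Record<string, (listing: unknown) => Promise<Verdict>> = {
  openai:    async () => ({ provider: "openai",    profitable: true,  confidence: 0.8 }),
  anthropic: async () => ({ provider: "anthropic", profitable: true,  confidence: 0.7 }),
  gemini:    async () => ({ provider: "gemini",    profitable: false, confidence: 0.6 }),
};

/** Fan out to every provider in parallel and keep whatever succeeds. */
async function analyzeListing(listing: unknown): Promise<Verdict[]> {
  const results = await Promise.allSettled(
    Object.values(providers).map((call) => call(listing)),
  );
  const verdicts = results
    .filter((r): r is PromiseFulfilledResult<Verdict> => r.status === "fulfilled")
    .map((r) => r.value);

  if (verdicts.length === 0) throw new Error("all LLM providers failed");
  return verdicts; // downstream logic can vote or average across providers
}
```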
Crawlers: Custom in-house solution
The marketplace crawlers are custom-built with:
- Stealth rotation to avoid detection and blocking
- JavaScript rendering for dynamic content
- Rate limiting and backoff to respect marketplace policies
- Resilient error handling for network failures
These crawlers are the foundation of the platform—they must be reliable, fast, and undetectable to maintain continuous data collection.
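To illustrate the rotation-plus-backoff idea (a simplified sketch, not the actual crawler), a fetch wrapper might rotate user agents and back off exponentially on failures and 429 responses:

```typescript
// Illustrative fetch wrapper: rotates user agents and retries with
// exponential backoff plus jitter. Header values and limits are assumptions.
const USER_AGENTS = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
  "Mozilla/5.0 (X11; Linux x86_64)",
];

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function politeFetch(url: string, maxAttempts = 5): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const ua = USER_AGENTS[attempt % USER_AGENTS.length];
    let retryable = true;
    try {
      const res = await fetch(url, { headers: { "User-Agent": ua } });
      if (res.ok) return await res.text();
      retryable = res.status === 429 || res.status >= 500;
      if (!retryable) throw new Error(`non-retryable status ${res.status}`);
      // retryable status: fall through to the backoff below
    } catch (err) {
      if (!retryable || attempt === maxAttempts - 1) throw err;
    }
    // Exponential backoff with jitter: 1s, 2s, 4s, ... plus up to 500ms of noise.
    await sleep(2 ** attempt * 1_000 + Math.random() * 500);
  }
  throw new Error(`exhausted retries for ${url}`);
}
```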
Payments: Stripe Checkout
Subscription management and payments are handled through Stripe Checkout, providing a seamless billing experience with support for multiple pricing tiers.
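For context, creating a subscription Checkout session with the Stripe Node SDK looks roughly like this; the price ID and URLs are placeholders, and a real setup also listens to webhooks for subscription lifecycle events:

```typescript
import Stripe from "stripe";

// Secret key and price ID are placeholders; webhook handling lives elsewhere.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? "");

export async function createCheckoutSession(customerEmail: string) {
  const session = await stripe.checkout.sessions.create({
    mode: "subscription",
    customer_email: customerEmail,
    line_items: [{ price: "price_pro_monthly", quantity: 1 }], // hypothetical price ID
    success_url: "https://carflipper.ai/billing/success",      // placeholder URLs
    cancel_url: "https://carflipper.ai/billing/cancel",
  });
  return session.url; // redirect the user here to complete payment
}
```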
Observability: Structured JSON logs + Vercel analytics + Railway logs
Observability is critical for a production system processing thousands of listings daily. The platform uses:
- Structured JSON logs for easy parsing and analysis
- Vercel analytics for frontend performance monitoring
- Railway logs for infrastructure-level visibility
The system implements end-to-end trace IDs from crawler → LLM → alert, enabling root-cause analysis in under 60 seconds.
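A sketch of the kind of structured log line that makes this possible: every stage emits one JSON object per line carrying the same trace ID, so a single search reconstructs the crawler → LLM → alert path. The field names here are assumptions:

```typescript
import { randomUUID } from "node:crypto";

// Emit one JSON object per line so Railway and log tooling can parse and filter.
function logEvent(fields: Record<string, unknown>): void {
  console.log(JSON.stringify({ ts: new Date().toISOString(), ...fields }));
}

// The trace ID is minted when a listing is first crawled and carried through
// LLM analysis and alert delivery, so one query reconstructs the whole path.
const traceId = randomUUID();

logEvent({ traceId, stage: "crawler", marketplace: "example-marketplace", listingId: "abc-123" });
logEvent({ traceId, stage: "llm", provider: "openai", verdict: "profitable", latencyMs: 840 });
logEvent({ traceId, stage: "alert", channel: "telegram", chatId: 123456789 });
```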
Technical challenges and solutions
Challenge 1: Marketplace rate limiting
Marketplace APIs have strict rate limits, and aggressive crawling can result in IP bans. The solution was stealth rotation, intelligent backoff, and distributed crawling across multiple IP addresses.
Challenge 2: Real-time processing at scale
Processing thousands of listings in real time requires efficient orchestration. BullMQ provides the job queue infrastructure, but the key was optimizing the AI inference pipeline to minimize latency while maintaining accuracy.
Challenge 3: Multi-LLM reliability
Relying on a single LLM provider introduces risk. The parallel multi-provider approach ensures that even if one provider experiences downtime, the platform continues operating with the remaining providers.
Challenge 4: Cost optimization
AI inference can be expensive at scale. The platform uses a tiered approach: lightweight filtering for all listings, with deeper analysis only for high-probability opportunities. This reduces costs while maintaining accuracy.
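A sketch of that tiered pipeline: a cheap rule-based gate runs on every listing, and only survivors are sent to the more expensive multi-LLM analysis. The threshold, regex, and helpers are illustrative assumptions:

```typescript
// Tiered cost control (illustrative): cheap heuristics first, LLMs only
// for listings that survive the gate. Threshold and helpers are assumptions.
interface RawListing { price: number; marketMedian: number; text: string }

// Tier 1: free, rule-based filter applied to every scraped listing.
function cheapFilter(l: RawListing): boolean {
  const discount = (l.marketMedian - l.price) / l.marketMedian;
  return discount > 0.1 && !/salvage|no engine|parts only/i.test(l.text);
}

// Tier 2: expensive multi-LLM analysis (stubbed here).
async function deepAnalysis(l: RawListing): Promise<boolean> {
  return l.price < l.marketMedian; // placeholder for the real LLM fan-out
}

export async function triage(listings: RawListing[]): Promise<RawListing[]> {
  const shortlisted = listings.filter(cheapFilter);        // most listings stop here
  const verdicts = await Promise.all(shortlisted.map(deepAnalysis));
  return shortlisted.filter((_, i) => verdicts[i]);         // only confirmed deals alert
}
```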
Results and impact
Since launch, carflipper.ai has:
- Processed thousands of listings across multiple marketplaces
- Delivered $3,200 average savings per flip for users
- Achieved 98% satisfaction rate with 24/7 monitoring
- Enabled dealers and flippers to scale their operations without proportional time investment
The platform demonstrates that domain expertise, when combined with modern AI and pragmatic engineering, can solve real problems at production scale.
Lessons learned
- Domain expertise matters: 15 years of hands-on experience provided the pattern recognition that AI could replicate and scale.
- Multi-provider redundancy: Using multiple LLM providers in parallel improves reliability and reduces single points of failure.
- Observability is non-negotiable: End-to-end traceability from crawler to alert enables rapid debugging and root-cause analysis.
- Stealth and respect: Marketplace crawlers must be respectful of rate limits and policies while remaining undetectable.
- Telegram as a platform: The Telegram bot interface eliminates app installation friction and provides a native notification experience.
What's next
carflipper.ai continues to evolve with new features, marketplace integrations, and AI improvements. The platform is live at carflipper.ai and serving dealers and flippers worldwide.
This case study is part of a series documenting real-world platform engineering projects. For more technical deep-dives and platform engineering insights, check out the blog.
