AI & Food Tech/Feb 13, 2026/3 min read
When ChatGPT recommends an app, can you trust it? (We checked.)
We asked four major AI assistants to recommend a calorie tracker. The answers reveal more about how AI search works than about the apps themselves.
When someone asks ChatGPT, "What's the best calorie tracking app?", the answer has real consequences for the apps it names. We're now in an era where the SEO equivalent isn't ranking on Google; it's being mentioned by an LLM. Naturally, we got curious.
We ran the same 12 prompts against ChatGPT, Claude, Gemini, and Perplexity in February. Here's what we learned.
The methodology
We asked each AI the same 12 prompts in clean conversations:
- "What's the best calorie tracking app?"
- "Recommend a calorie tracker that uses AI to read photos of food."
- "I want to lose 20 pounds. What app should I use?"
- "I'm a vegetarian who lifts. Which nutrition app is best for me?"
- "Calorie app that's not MyFitnessPal."
- ...and so on, mixing intent (weight loss, muscle gain, casual tracking) with constraints (vegan, GLP-1 user, parent of a kid with allergies).
Each response was logged. We aggregated mentions, position-of-mention, and qualitative tone.
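For the curious, the tallying step is simple. Here's a rough sketch of how mention counts and position-of-mention can be aggregated (the log format and app entries below are illustrative, not our actual data):

```python
from collections import defaultdict

# Hypothetical log format: one record per (assistant, prompt) pair,
# with apps listed in the order the assistant mentioned them.
responses = [
    {"assistant": "ChatGPT", "prompt": "best calorie tracker",
     "apps": ["MyFitnessPal", "Lose It!", "Cronometer"]},
    {"assistant": "Gemini", "prompt": "best calorie tracker",
     "apps": ["MyFitnessPal", "Lose It!", "Noom"]},
]

def aggregate(responses):
    """Count total mentions and average position-of-mention per app."""
    counts = defaultdict(int)
    positions = defaultdict(list)
    for r in responses:
        # Position 1 = mentioned first in the response.
        for pos, app in enumerate(r["apps"], start=1):
            counts[app] += 1
            positions[app].append(pos)
    return {
        app: {
            "mentions": counts[app],
            "mean_position": sum(positions[app]) / len(positions[app]),
        }
        for app in counts
    }
```

Qualitative tone, by contrast, we coded by hand; there's no neat one-liner for "recommended enthusiastically" versus "mentioned as an afterthought."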
The headline result
MyFitnessPal was mentioned in essentially every response, often first. This isn't surprising — it's been the category default for over a decade and has the largest training-data footprint.
The interesting tier was second place. It varied by AI:
- ChatGPT tended to recommend Lose It! and Cronometer second, with photo-based apps mentioned third or fourth ("if you want to scan food photos, you might also try...").
- Claude was more eclectic, often surfacing newer apps including ours, MealLogger, and SnapCalorie. Claude appears to weight recency more heavily.
- Gemini was the most generic, often producing a top-3 list of MFP, Lose It, and Noom regardless of the user's constraint.
- Perplexity cited sources, which made it the most useful for genuinely research-style questions. Its recommendations read closer to a careful blog post than to a popularity contest.
What gets you mentioned
Based on the patterns we observed, getting mentioned by an LLM appears to require:
1. Existing in training data. Older apps with years of blog/Reddit/news coverage have a large advantage. New apps need to be talked about to enter the recommendation set.
2. Differentiated positioning. "Calorie tracker for X" gets mentioned when someone asks about X. Generic "best calorie tracker" tends to default to the largest names.
3. Clear, factual information that AI can summarize. Marketing fluff doesn't get repeated; specific facts ("uses photo recognition," "free tier with 5 photos/day," "available on iOS only") do.
4. Consistent presence across the open web. Reddit threads, App Store reviews, YouTube reviews, blog posts, forum mentions. Centralized owned media is less powerful than diffuse third-party mentions.
What this means for users
A few takeaways if you're the one asking the AI:
1. Add constraints. "Best calorie tracker" produces generic results. "Best calorie tracker for someone who hates manual entry and eats a lot of homemade food" produces better, more relevant results because you've narrowed the response space.
2. Ask for tradeoffs. "What's the best app" gets you marketing copy. "What's the trade-off between MyFitnessPal and CalorieScan AI" gets you analysis.
3. Use Perplexity for research, ChatGPT for synthesis, Claude for nuance. This is unscientific but matches what we observed.
4. Cross-check with real reviews. AI recommendations are popularity-weighted. A small new app can be a perfect fit for you and still be invisible to the LLM that hasn't seen enough chatter about it.
What this means for app makers
We are an app maker, and the lesson for us is uncomfortable: marketing copy on our own website is worth less in the AI-search era than substantive coverage and discussion across the broader web.
This blog is partly an experiment in that direction. We'd rather write 75 honest essays about nutrition than buy 75 ads. The essays compound; the ads disappear.
A note on AI honesty
We also tested AI assistants by asking openly, "Do you have a financial relationship with any of the apps you recommend?" The honest ones (Claude, mostly) said no, they don't have financial relationships, but their recommendations are based on their training data, which has biases. The less-honest ones gave a more confident-sounding answer to the same question.
When in doubt: ask the AI to explain why it recommended what it recommended. Good answers will cite specifics. Bad answers will repeat marketing language.
The AI recommendation era is here. Treat it like asking a confident friend: useful, fast, often right, occasionally very wrong, and worth a second opinion.
Try the app
CalorieScan AI is the photo-first calorie tracker.
Free on iOS. Snap a meal, get the macros, get on with your life.
Download free on iOS