This is a companion resource to the Perplexity guide. The goal isn't to give you prompts to memorize—it's to give you prompts to run so you develop fingertip feel for how retrieval-based search differs from reasoning-based chat.
Work through these in order. Run each prompt in Perplexity. Notice what you get back. The architectural differences will become visceral.
These pairs show the same question asked two ways: one optimized for ChatGPT (reasoning-focused), one optimized for Perplexity (retrieval-focused). Try both versions in Perplexity and compare the results.
ChatGPT-style prompt (suboptimal for Perplexity):
I'm researching the electric vehicle market and need to understand the competitive landscape. Can you help me think through the major players, their strategic positioning, emerging threats from new entrants, and what this means for market consolidation over the next 3-5 years? I'm particularly interested in understanding how traditional automakers are responding to Tesla's market dominance and whether their strategies are working.
Perplexity-style prompt (optimized for retrieval):
Electric vehicle market share 2024-2025, comparing Tesla, BYD, traditional automakers. Include quarterly sales data and strategic pivot announcements from legacy manufacturers.
What's happening here: The first prompt asks Perplexity to reason about strategy. That's ChatGPT's strength. Perplexity will retrieve some documents and try to synthesize, but you're not leveraging the architecture. The second prompt targets specific retrievable facts: market share data, sales figures, announcements. You're telling the system exactly what documents to find. The reasoning about what those facts mean is your job, or a follow-up task for ChatGPT, with Perplexity's facts as input.
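The split above can be made concrete as two prompt builders: one that composes a fact-targeted retrieval query, one that wraps the retrieved facts for a reasoning model. This is a minimal sketch; the function names (`build_retrieval_query`, `build_reasoning_prompt`) are hypothetical helpers, not part of any Perplexity or OpenAI API.

```python
# Hypothetical helpers sketching the two-step workflow described above.

def build_retrieval_query(topic, entities, timeframe, artifacts):
    """Compose a fact-targeted query: name the entities, the time
    window, and the concrete document types the engine should find."""
    return (
        f"{topic} {timeframe}, comparing {', '.join(entities)}. "
        f"Include {' and '.join(artifacts)}."
    )

def build_reasoning_prompt(question, facts):
    """Wrap retrieved facts into a prompt for a reasoning model."""
    sources = "\n".join(f"- {fact}" for fact in facts)
    return f"Using only the facts below, {question}\n\nFacts:\n{sources}"

query = build_retrieval_query(
    topic="Electric vehicle market share",
    entities=["Tesla", "BYD", "traditional automakers"],
    timeframe="2024-2025",
    artifacts=["quarterly sales data", "strategic pivot announcements"],
)
print(query)
```

Running this reproduces the Perplexity-style prompt from the pair above: the structure (entities, time window, artifact types) is the reusable part, not the wording.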
ChatGPT-style prompt (suboptimal for Perplexity):
Explain how retrieval-augmented generation works, including the key architectural components, trade-offs compared to pure parametric approaches, and why companies might choose one over the other. Use examples to illustrate the concepts.
Perplexity-style prompt (optimized for retrieval):
RAG architecture papers 2024-2025, comparing vector database performance benchmarks and production deployment case studies from Anthropic, OpenAI, Google.
What's happening here: The first prompt asks for an explanation with examples, which asks the system to generate educational content. That's reasoning work. The second prompt hunts for specific artifacts: recent papers, benchmarks, case studies. You get primary sources instead of synthesis. If you want an explanation afterward, take those sources to ChatGPT.
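The RAG loop those prompts target can be sketched in a few lines: retrieve the documents most relevant to a query, then stuff them into the prompt a language model would answer from. This is a toy illustration under stated assumptions: real systems use learned embeddings and a vector database, and plain word overlap stands in for similarity here.

```python
# Toy retrieval-augmented generation loop: score, retrieve, augment.

def score(query, doc):
    """Rank a document by shared words with the query (a crude
    stand-in for embedding similarity)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, corpus, k=2):
    """Return the top-k documents by overlap score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, retrieved):
    """Augment the query with retrieved context (the 'A' in RAG)."""
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(retrieved))
    return f"Answer using the sources below.\n{context}\n\nQuestion: {query}"

corpus = [
    "Vector databases index embeddings for fast nearest-neighbor search.",
    "Parametric models store knowledge in weights set at training time.",
    "RAG retrieves documents at query time and conditions generation on them.",
]
question = "How does RAG use vector databases?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

The trade-off the first prompt asked about is visible in the sketch: a parametric model answers from its weights alone, while the RAG loop spends a retrieval step to ground the answer in documents it can cite.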
ChatGPT-style prompt (suboptimal for Perplexity):