

Tips & Tricks to Coding with AI

Coding with AI can dramatically accelerate your workflow, but poor prompting habits lead to problematic code and long, frustrating conversations with the AI. Developing good AI coding habits saves you tokens, time, and resources, while getting you faster and more valuable responses.

<aside> 🥡

Key Takeaways

  1. Start new Threads often
  2. Select the right LLM
  3. Use Focus Mode
  4. Keep Temperature at Zero
  5. Upload Relevant Files & Info
  6. Make your code AI-Friendly

</aside>

1. Start new Threads often

Every message you send provides the AI with the information and ‘context’ to generate a relevant response. However, each time you send a message, your entire chat history gets sent too. Longer conversations therefore have a higher likelihood of mixing up previous messages and contexts, which can cause the AI to output broken or irrelevant code.

To combat this, keep your Threads concise and start a new Thread for every new task. This reduces the likelihood of the AI incorrectly drawing on previous contexts.


Tips on using Threads

Summarize your Thread

If you need to draw on context from previous Threads, ask the AI for a summary at the end of a chat, then refine and reuse it in future conversations. Simply ask the AI to summarize your discussion, copy that summary into a fresh Thread, and say "Let's continue from here." See the prompt below.

image.png
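For example, a summary prompt along these lines works well (the wording is illustrative, not a fixed Expanse template):

```
Summarize our conversation so far: the goal, the key decisions we made,
the current state of the code, and any open issues or next steps.
Keep it short enough to paste at the start of a new Thread.
```

Paste the AI's answer as the first message of the new Thread, then continue from there.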

<aside> 💡

Edit Messages to Branch Responses

When you need to modify multiple files, or want to compare results, consider editing the message and branching the conversation.

Now, you can switch between results while preserving the same context from previous messages.

Edit message toggle.png

</aside>

2. Select the right LLM

Not all AI models are created equal when it comes to coding tasks. Picking the right LLM can make all the difference in how smoothly your AI-assisted coding flows.

Claude Opus excels at understanding complex codebases and providing detailed explanations, while o3 is great at specialized tasks and technical reasoning. For longer code, Gemini 2.5 Pro offers an exceptionally large context window.

There are many benchmarks and leaderboards that compare coding models by performance and cost, such as https://aider.chat/docs/leaderboards/. You can try different LLMs in Expanse to see which fits your project’s needs best.

image.png

3. Use Focus Mode

Giving the AI too much previous context in the form of multiple messages with long blocks of code can cloud its ability to generate relevant responses.

To keep your responses focused and relevant, use Expanse in Focus Mode. This way, the AI responds only to the current message, without interpreting any of the previous messages in the Thread.

image.png

<aside> 💡

You can seamlessly switch modes by toggling the Chat/Focus button at the top of the window, or by using the Chat Editor.

</aside>



4. Keep Temperature at Zero

An LLM’s temperature setting controls how much randomness, or “creativity,” goes into its responses. Since most coding workflows don’t benefit from creativity, set your temperature to 0 to produce more focused, consistent, and precise outputs.
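If you call a model programmatically rather than through the Expanse UI, the same advice applies: pin `temperature` to 0 in the request. Here is a minimal sketch assuming an OpenAI-style chat-completion payload; the model name and message wording are illustrative, not part of any specific Expanse configuration.

```python
def build_request(prompt: str, model: str = "gpt-4o") -> dict:
    """Assemble a deterministic coding request payload (illustrative)."""
    return {
        "model": model,
        "temperature": 0,  # no sampling randomness: focused, repeatable output
        "messages": [
            {"role": "system", "content": "You are a careful coding assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Refactor this function to remove duplication.")
print(payload["temperature"])  # 0
```

With temperature at 0 the model greedily picks its most likely tokens, so re-running the same prompt tends to give the same answer, which makes debugging generated code much easier.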

Screenshot 2025-06-19 at 2.40.20 PM.png