# GPT-5 Codex
GPT-5 Codex is OpenAI's latest model, offering advanced reasoning and code generation capabilities.
## Overview
GPT-5 Codex provides:

- **Advanced reasoning**: complex, multi-step problem solving
- **Large context**: understands big codebases in a single pass
- **Code expertise**: trained on a vast code corpus
- **Tool use**: function-calling capabilities (see the sketch below)
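Tool use goes through OpenAI's standard function-calling interface. A minimal sketch with the `openai` Node SDK; the `run_tests` tool is hypothetical, purely for illustration:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{ role: "user", content: "Run the auth module's test suite" }],
  tools: [
    {
      type: "function",
      function: {
        name: "run_tests", // hypothetical tool, defined by the host application
        description: "Run the project's test suite for a given path",
        parameters: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    },
  ],
});

// The model may answer with a tool call instead of plain text.
console.log(response.choices[0].message.tool_calls);
```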
## When to Use Codex

### Ideal For
- ✅ Complex algorithms
- ✅ Architecture decisions
- ✅ Code optimization
- ✅ Advanced debugging
- ✅ Large refactoring
### Less Ideal For
- ⚠️ Budget-conscious projects (expensive)
- ⚠️ Simple tasks (overkill)
- ⚠️ Real-time applications (slower)
## Setup
### API Key

1. Go to [platform.openai.com](https://platform.openai.com)
2. Create an API key
3. Add billing (required for GPT-5)

```bash
export OPENAI_API_KEY="your-openai-api-key"
```
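To confirm the key works before pointing the agent at it, a quick sanity check with the `openai` SDK (assumes Node 18+ and `npm install openai`):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // picks up OPENAI_API_KEY automatically

// Listing models is a cheap call that verifies both the key and billing.
const models = await client.models.list();
console.log(models.data.map((m) => m.id));
```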
### Configuration

```json
{
  "agents": {
    "codex": {
      "enabled": true,
      "model": "gpt-5",
      "autonomy": "workspace-write"
    }
  }
}
```
### Model Options

| Model | Context | Speed | Cost |
|---|---|---|---|
| gpt-5 | 128K | Medium | $$$$ |
| gpt-4-turbo | 128K | Fast | $$$ |
| gpt-4o | 128K | Faster | $$ |
## Usage

### From UI

1. Open the task
2. Click "Run Agent"
3. Select "GPT-5 Codex"
4. Start the run

### From CLI

```bash
friday-dev run --task 123 --agent codex
```
## Capabilities

### Code Generation

Codex excels at generating complex code:

```markdown
## Task
Implement a binary search tree with:
- Insert, delete, search operations
- In-order, pre-order, post-order traversal
- Self-balancing (AVL or Red-Black)
- TypeScript with full type safety
```
### Algorithm Optimization

```markdown
## Task
Optimize the current O(n) linear search to O(log n)
Current implementation is in src/search/linear.ts
Suggest data structure changes if needed
```
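For reference, the shape of the O(log n) replacement such a task usually yields: binary search over sorted data. A generic sketch, not actual Codex output (the contents of `linear.ts` aren't shown here):

```typescript
// O(log n) lookup over a sorted array: halve the search range each step.
function binarySearch<T>(
  items: T[],
  key: (item: T) => number,
  target: number
): T | undefined {
  let lo = 0;
  let hi = items.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const k = key(items[mid]);
    if (k === target) return items[mid];
    if (k < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return undefined;
}

// Caveat: items must stay sorted by the key for this to be valid.
const users = [{ id: 2 }, { id: 5 }, { id: 9 }];
binarySearch(users, (u) => u.id, 5); // { id: 5 }
```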
### Architecture Design

```markdown
## Task
Design the event-driven architecture for:
- Real-time notifications
- Message queuing
- At-least-once delivery guarantee
- Horizontal scaling support
```
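To pin down the trickiest requirement above, at-least-once delivery, here is a toy sketch of the semantics: a message is only removed once the consumer acknowledges it, so a consumer crash causes redelivery (possibly duplicates) rather than loss. Names are illustrative, not Codex output:

```typescript
type Message = { id: string; payload: unknown };

class AtLeastOnceQueue {
  private pending: Message[] = [];
  private inFlight = new Map<string, Message>();

  publish(msg: Message): void {
    this.pending.push(msg);
  }

  // Hand a message to a consumer without removing it permanently.
  receive(): Message | undefined {
    const msg = this.pending.shift();
    if (msg) this.inFlight.set(msg.id, msg);
    return msg;
  }

  // Only an explicit ack removes the message for good.
  ack(id: string): void {
    this.inFlight.delete(id);
  }

  // Called on consumer timeout/crash: requeue unacked messages.
  // This is what makes delivery at-least-once (duplicates possible).
  redeliver(): void {
    for (const msg of this.inFlight.values()) this.pending.push(msg);
    this.inFlight.clear();
  }
}
```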
## Example Output

### Complex Algorithm

Task: "Implement LRU cache with O(1) operations"
```typescript
/**
 * LRU Cache implementation using HashMap + Doubly Linked List.
 *
 * Time Complexity:
 * - get: O(1)
 * - put: O(1)
 *
 * Space Complexity: O(capacity)
 */
class LRUCache<K, V> {
  private capacity: number;
  private cache: Map<K, DoublyLinkedListNode<K, V>>;
  private head: DoublyLinkedListNode<K, V>;
  private tail: DoublyLinkedListNode<K, V>;

  constructor(capacity: number) {
    this.capacity = capacity;
    this.cache = new Map();
    // Sentinel nodes: they never hold real data, but they remove
    // null checks from the list-splicing paths below.
    this.head = new DoublyLinkedListNode(null as unknown as K, null as unknown as V);
    this.tail = new DoublyLinkedListNode(null as unknown as K, null as unknown as V);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  get(key: K): V | undefined {
    const node = this.cache.get(key);
    if (!node) return undefined;
    // Move to front (most recently used)
    this.removeNode(node);
    this.addToFront(node);
    return node.value;
  }

  put(key: K, value: V): void {
    const existing = this.cache.get(key);
    if (existing) {
      // Update existing entry and promote it
      existing.value = value;
      this.removeNode(existing);
      this.addToFront(existing);
    } else {
      if (this.cache.size >= this.capacity) {
        // Evict least recently used (the node just before the tail sentinel)
        const lru = this.tail.prev!;
        this.removeNode(lru);
        this.cache.delete(lru.key);
      }
      const node = new DoublyLinkedListNode(key, value);
      this.addToFront(node);
      this.cache.set(key, node);
    }
  }

  private addToFront(node: DoublyLinkedListNode<K, V>): void {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  private removeNode(node: DoublyLinkedListNode<K, V>): void {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
  }
}

class DoublyLinkedListNode<K, V> {
  key: K;
  value: V;
  prev: DoublyLinkedListNode<K, V> | null = null;
  next: DoublyLinkedListNode<K, V> | null = null;

  constructor(key: K, value: V) {
    this.key = key;
    this.value = value;
  }
}
```
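A quick usage check of the cache above:

```typescript
const cache = new LRUCache<string, number>(2);
cache.put("a", 1);
cache.put("b", 2);
cache.get("a");              // promotes "a" to most recently used
cache.put("c", 3);           // evicts "b", the least recently used
console.log(cache.get("b")); // undefined
console.log(cache.get("a")); // 1
```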
## Best Practices

### Leverage Context Window
Codex can handle large contexts:
```markdown
## Task
Refactor the entire authentication module

## Context
Include all files:
- src/auth/*.ts (8 files)
- src/middleware/auth.ts
- src/types/auth.ts
- tests/auth/*.test.ts

Maintain backward compatibility with the existing API
```
### Ask for Explanations

```markdown
## Task
Optimize database queries in UserService

## Requirements
- Explain current performance issues
- Show before/after query plans
- Document the optimization strategy
```
## Pricing

GPT-5 is OpenAI's premium model:

- Higher cost per token than the GPT-4-class models
- Check [openai.com/pricing](https://openai.com/pricing) for current rates
- Monitor usage in settings
### Cost Optimization

- Use GPT-4o for simpler tasks (see the config sketch below)
- Reserve GPT-5 for complex problems
- Be specific to reduce token usage
- Use shorter context when possible
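One way to apply the first two points, reusing the configuration schema shown in Setup: default the agent to a cheaper model and switch `model` back to `gpt-5` only for hard tasks.

```json
{
  "agents": {
    "codex": {
      "enabled": true,
      "model": "gpt-4o",
      "autonomy": "workspace-write"
    }
  }
}
```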
## Comparison

| Feature | Codex | Claude | Gemini |
|---|---|---|---|
| Reasoning | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Speed | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Cost | $$$$ | $$$ | Free/$ |
| Context | 128K | 200K | 1M |
## Troubleshooting

### API Errors

```text
Error: insufficient_quota
```

Solutions:

- Add billing to your OpenAI account
- Check spending limits
- Use a different model (see the fallback sketch below)
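When quota errors must be handled in code, the `openai` Node SDK raises them as `APIError`. A hedged sketch of a fallback; the retry-on-`gpt-4o` policy is an assumption for illustration, not built-in behavior:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

async function completeWithFallback(prompt: string) {
  try {
    return await client.chat.completions.create({
      model: "gpt-5",
      messages: [{ role: "user", content: prompt }],
    });
  } catch (err) {
    // insufficient_quota means billing or spending limits blocked the request.
    if (err instanceof OpenAI.APIError && err.code === "insufficient_quota") {
      // Assumed fallback policy: retry once on a cheaper model.
      return await client.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      });
    }
    throw err;
  }
}
```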
### Slow Responses

GPT-5 prioritizes quality over latency:

- Use GPT-4o when you need faster responses
- Reduce the context size
- Break work into smaller tasks