CPU Pipeline Simulator
Ever wondered what actually happens inside your processor when it runs code? This interactive simulator visualizes the life of an instruction as it moves through a classic 5-stage CPU pipeline. It’s designed to help students and enthusiasts understand complex concepts like pipeline hazards, cache latency, and branch prediction in a visual, hands-on way.
How to Use This Tool
1. Write Your Program
On the left side, you’ll see a code editor. You can write simple assembly code using the following instructions:
- MOV R1, 10 – Move value 10 into Register 1.
- ADD R3, R1, R2 – Add R1 and R2, store result in R3.
- SUB R3, R1, R2 – Subtract R2 from R1, store result in R3.
- LOAD R1, 100 – Load value from memory address 100 into R1.
- STORE R1, 100 – Store value of R1 into memory address 100.
- JUMP 5 – Jump to line 5 (0-indexed).
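For instance, a short program combining these instructions might look like the following (the values are illustrative; the `;` comments are for readability here and may not be accepted by the editor):

```
MOV R1, 10      ; R1 = 10
MOV R2, 20      ; R2 = 20
ADD R3, R1, R2  ; R3 = R1 + R2 = 30
STORE R3, 100   ; memory[100] = 30
LOAD R4, 100    ; R4 = memory[100] = 30
```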
Tip: Click the “Load Example” button to see a pre-written program that demonstrates hazards and loops.
2. Control the Simulation
Use the control bar at the top to manage execution:
- Run/Pause: Execute the program automatically.
- Step: Advance the clock by exactly one cycle. Great for debugging!
- Speed Slider: Adjust how fast the clock ticks.
- Reset: Clears the pipeline, registers, and memory so you can start over.
3. Tweak the Hardware
Click the Settings (⚙️) icon to modify the CPU’s physical properties:
- Cache Latency: How many cycles does the CPU wait if data isn’t in the L1 Cache?
- Branch Misprediction: How often does the CPU guess the wrong path on a JUMP instruction?
- Hazards: Choose between “Stall” (realistic for simple CPUs) or “Forwarding” (advanced optimization).
Understanding the Pipeline
Modern CPUs don’t execute one instruction at a time; they work on several at once, like a factory assembly line. This simulator breaks it down into 5 stages:
1. Fetch (IF)
The CPU grabs the next instruction from memory based on the Program Counter (PC).
2. Decode (ID)
The CPU figures out what the instruction means (e.g., “Oh, this is an ADD command”) and identifies which registers are needed.
3. Execute (EX)
The actual calculation happens here. The ALU (Arithmetic Logic Unit) adds numbers, subtracts values, or calculates memory addresses.
4. Memory (MEM)
If the instruction involves RAM (like LOAD or STORE), it happens here. Watch out for Cache Misses! If the data isn’t in the fast L1 cache, the pipeline will STALL (turn red) while waiting for slow main memory.
5. Writeback (WB)
The final result is written back into the CPU registers (R0-R7), making it available for future instructions.
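Put together, the five stages overlap like an assembly line. Assuming an ideal pipeline with no stalls, three independent instructions flow through it like this (one column per clock cycle; a new instruction enters Fetch every cycle while earlier ones advance):

```
Cycle:       1    2    3    4    5    6    7
MOV R1, 10   IF   ID   EX   MEM  WB
MOV R2, 20        IF   ID   EX   MEM  WB
MOV R3, 30             IF   ID   EX   MEM  WB
```

Once the pipeline is full, one instruction finishes every cycle even though each individual instruction still takes 5 cycles end to end.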
Key Concepts Visualized
⚠️ Data Hazards (Stalls)
If you try to use a register immediately after writing to it (e.g., ADD R1... then immediately SUB R1...), the second instruction has to wait for the first one to finish. You will see the instruction turn RED and say “STALL”. This is a “Read-After-Write” hazard.
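You can trigger this hazard deliberately with two dependent instructions:

```
ADD R1, R2, R3  ; writes R1 (result not ready until Writeback)
SUB R4, R1, R2  ; reads R1 immediately -> Read-After-Write hazard
```

In "Stall" mode the SUB sits in Decode (shown in red) until the ADD's result is written back; with "Forwarding" enabled, the result is passed straight from the ALU to the waiting instruction. The exact number of stall cycles depends on your hazard setting.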
⚡ Cache Hits vs. Misses
At the bottom, you can see the L1 Cache. When you LOAD a memory address for the first time, it’s a MISS (slow). The block turns red, and the pipeline stalls. If you access it again, it’s a HIT (fast), and the block turns green!
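A quick way to see this in action (address choices are illustrative; whether two addresses share a cache block depends on the simulator's cache geometry):

```
LOAD R1, 100   ; first access to address 100: MISS (red, pipeline stalls)
LOAD R2, 100   ; same address again: HIT (green, no stall)
LOAD R3, 200   ; new address: likely another MISS
```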
