The problem with using the built-in delay is that CPU clock cycles are essentially wasted. In a simple program like the one in the previous post, we aren't worried about conserving clock cycles, but you can imagine that in a more complex program you would want every clock cycle to do something useful. Once interrupts are set up and enabled, they "interrupt" the program when a certain condition is met. In the case of a delay, a timer counts clock cycles, and once a certain count is reached, the program is interrupted and carries out a "service routine" specified by the programmer. When the service routine finishes, the program returns to the instruction it was executing when the interrupt occurred.
To illustrate the difference between the two types of delays, I wrote a program that does exactly the same thing as the program in the previous post, but uses interrupts rather than the built-in delay. The program simply turns the built-in LEDs on and off at a rate that is perceptible to the human eye:
#include <avr/io.h>
#include <avr/interrupt.h>
int main(void)
{
    DDRB = 0xFF;           //PORT B (LEDs) output
    TCNT0 = 0;             //timer starts at zero
    OCR0 = 200;            //and counts up to 200
    TCCR0 = 0b00001101;    //CTC mode, internal clk, 1024 prescaler
    sei();                 //global interrupt enable
    TIMSK = (1<<OCIE0);    //timer0 compare match interrupt enable
    while(1)
        asm("nop");        //stay here....
}
ISR(TIMER0_COMP_vect)
{
    PORTB ^= 0xFF;         //toggle PORTB (LEDs)
}
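For a rough sense of the timing: in CTC mode the compare-match interrupt fires every (OCR0 + 1) × prescaler / F_CPU seconds. The clock speed isn't specified in this post, but assuming the factory-default 1 MHz internal oscillator, that works out to 201 × 1024 / 1,000,000 ≈ 0.21 s between interrupts, so the LEDs toggle roughly five times per second, well within what the eye can follow.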