# ICS 269, Fall 2004: Theory Seminar

## 8 Oct 2004:

Dynamic Speed Scaling to Manage Energy

John Augustine

The energy consumed by a processor per unit time, P(*s*), is a convex
function of the speed *s* at which it runs. This convexity has been
exploited to reduce energy consumption by slowing the processor down.
The algorithms explained in this talk are based on the following model,
which captures this key aspect of energy minimization. Each job must be
executed between its arrival time and deadline by a single processor
with variable speed. Running at speed *s*, the processor consumes
energy per unit time P(*s*) = *s*^{ p}, *p* ≥ 2. We are asked to schedule the
jobs so that the total energy consumed is minimized. The offline
version can be solved optimally in polynomial time [1]; I will present
the algorithm and a proof. [1] also gives an online algorithm with a
competitive ratio between *p*^{ p} and 2^{p-1} *p*^{ p}. In a recent
improvement, Bansal et al. [2] give an online algorithm with a
competitive ratio of *p*^{ p}. I will explain this algorithm and provide
a proof if time permits.
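To make the model concrete: a job of work *w* run at constant speed *s* takes time *w*/*s* and uses energy *s*^{ p} · (*w*/*s*) = *s*^{ p-1} *w*, so running slower always saves energy as long as deadlines permit. The offline algorithm of [1] (often called YDS) repeatedly finds the maximum-density time interval, runs the jobs confined to it at that density, and contracts the interval out of the timeline. Below is a minimal Python sketch of that idea; the job representation and the brute-force search over candidate interval endpoints are my own simplifications for illustration, not the paper's presentation.

```python
def yds(jobs, p=2):
    """Sketch of the Yao-Demers-Shenker (YDS) offline algorithm.

    jobs: list of (arrival, deadline, work) with deadline > arrival.
    Power is P(s) = s**p, so a job of work w run at speed s takes
    time w/s and uses energy s**p * (w/s) = s**(p-1) * w.
    Returns (total_energy, rounds); each round records the critical
    (maximum-density) interval chosen and the speed used on it.
    """
    jobs = [list(j) for j in jobs]   # mutable copies; times get contracted
    rounds = []
    total_energy = 0.0
    while jobs:
        # Candidate interval endpoints are arrivals and deadlines.
        times = sorted({t for a, d, _ in jobs for t in (a, d)})
        best, best_iv = -1.0, None
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                # Density of [t1, t2]: total work of jobs whose whole
                # window [a, d] fits inside, divided by the length.
                work = sum(w for a, d, w in jobs if t1 <= a and d <= t2)
                if work / (t2 - t1) > best:
                    best, best_iv = work / (t2 - t1), (t1, t2)
        t1, t2 = best_iv
        # Jobs confined to the critical interval run at speed = density
        # (inside the interval they are ordered by earliest deadline).
        inside = [j for j in jobs if t1 <= j[0] and j[1] <= t2]
        total_energy += sum(best ** (p - 1) * w for _, _, w in inside)
        rounds.append(((t1, t2), best))
        # Remove scheduled jobs and contract [t1, t2] out of the timeline.
        jobs = [j for j in jobs if j not in inside]
        for j in jobs:
            for k in (0, 1):
                if j[k] >= t2:
                    j[k] -= t2 - t1
                elif j[k] > t1:
                    j[k] = t1
    return total_energy, rounds
```

For example, with jobs (0, 1, 2) and (0, 2, 1) and *p* = 2, the critical interval is [0, 1] with density 2; that job uses energy 2 · 2 = 4, the remaining job then runs at speed 1 for energy 1, giving total energy 5.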

References:

1. Frances Yao, Alan Demers, and Scott Shenker, "A Scheduling Model for
Reduced CPU Energy," FOCS 1995

2. Nikhil Bansal, Tracy Kimbrel, and Kirk Pruhs, "Dynamic Speed Scaling
to Manage Energy and Temperature," FOCS 2004