Various Optimization Strategies

I’ve been a bit busy lately, and my blogging has suffered because of it. That’s really too bad, because I have a big list of topics / code that I’d like to post about, but most of them require at least a few hours of work to clean up and flesh out. It’s a bit overwhelming.

In the meantime, I wanted to do a quick post on a couple of interesting optimization strategies, especially ones I’ve come across here at Microsoft. These strategies are well-known, and some are used quite often in Visual Studio to improve performance, so I thought I’d write about them, just for fun. Maybe you’ll learn something, or maybe you’ll already know all of these.

Copy-On-Write

Copy-on-write is an optimization strategy that delays copying a shared resource until one of its users attempts to modify it. When a user does attempt to modify the resource, a true private copy is made behind the scenes and handed back to the requestor. This pays off especially in cases where resources rarely need to be modified, but you cannot guarantee they never will be.

One example of this strategy in use is in virtual memory operating systems. Multiple copies of the same process generally have identical binary code in system memory, so the memory used by the first process is marked copy-on-write when additional instances of the process are loaded. If any of these processes do attempt to modify the memory, then the kernel detects the write and makes a transparent copy of the binary code so that the other processes are unaffected.

Copy-on-write can also be used in application code. One possible implementation could involve creating a class which wraps a property defined in a read-only data source. As long as callers ask to read the resource, you can simply return the value stored in the data source. But if a caller wants to modify the value, then the class makes a private local copy and sets a flag indicating that from then on it should return the local copy instead of the data source value.

Here’s some sample code (C++):

class StringWrapper
{
private:
    char* s;
    bool local;   // true once this object owns a private copy of the string
public:
    StringWrapper(char* source) : s(source), local(false) {}
    char* Get() { return s; }
    void Append(const char* app)
    {
        // On a write, build a private copy large enough for the combined
        // string, so the original shared data is never modified.
        char* temp = new char[strlen(s) + strlen(app) + 1];
        strcpy(temp, s);
        strcat(temp, app);
        if (local)
            delete[] s;   // free the previous private copy
        s = temp;
        local = true;     // from now on, return the private copy
    }
};

Lazy Initialization

The idea behind lazy initialization is to defer setup work until the last possible moment: you initialize resources the first time they are requested, and no sooner.

For example, let’s say that you have a word processing program that supports spell-checking for hundreds of languages. Each separate language can be loaded from a dictionary file into a language-specific spell-checker object, from which you can call your spell-check method. You could just load every single dictionary file from disk when your word processor loads, but that would probably take a long time, and most of the languages are unlikely to be used in a single user session. In fact, the user might not even use the spell-checker at all. A better strategy would be to wait until the user requests a spell-check, determine whether or not the current language’s dictionary has already been loaded, and load it if it hasn’t yet. Then, you can proceed with your spell-check knowing that the user only incurred the cost of loading a single language at the time that they needed it.

Of course, you have to strike a balance here. If you lazily initialize everything in your program, then there’s a good chance that everything the user does will be a bit sluggish. Would the user be willing to wait a little bit longer for your program to load if it means that the most common operations are quicker? This seems especially relevant for operating systems. Slow boot times are annoying, but if you only reboot once a week, then a few extra seconds of pre-processing might be a good tradeoff for better OS performance.

Here’s some example code (C#):

public class MathConstants
{
    private bool piInit = false;
    private double pi;

    public double PI
    {
        get
        {
            if (!piInit)
            {
                pi = CalcPi();   // CalcPi() stands in for some expensive computation
                piInit = true;
            }

            return pi;
        }
    }
}

Double-Checked Locking

This pattern is often used in conjunction with lazy initialization, but in a multi-threaded environment. For example, in C# you might have a singleton class like so:

public class Singleton
{
    private static Singleton instance;
    private static Object lockObject = new Object();

    public static Singleton Instance
    {
        get
        {
            lock (lockObject)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}

This code prevents multiple threads that access Singleton.Instance at the same time from accidentally creating multiple copies of the singleton. But there is an overhead cost to acquiring the lock on every access, even though the singleton instance only needs to be initialized once! So a better implementation might perform an “unsafe check” first, before acquiring the lock, like so:

public class Singleton
{
    private static volatile Singleton instance;
    private static Object lockObject = new Object();

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (lockObject)
                {
                    if (instance == null)
                        instance = new Singleton();
                }
            }

            return instance;
        }
    }
}

Notice that the test if (instance == null) must be performed twice, because two threads could still get past the first if at the same time (there is no lock preventing them from doing so) and then sequentially create two different singletons, one after the other. Also, the volatile keyword must be used on the instance field, indicating that it might be modified by a concurrently executing thread. This prevents a thread from “caching” a stale value of instance and ensures that each thread always reads the current value of the field at the point it is requested.

Caching / Lookup Table

Let’s say you want to write a screensaver which will draw tons of circles of random sizes, colors, and positions all over the screen. And let’s say that your algorithm for drawing the circles involves a loop that looks something like this:

void DrawCircle(int r, int x, int y, Color color)
{
    for (double d = 0.0; d <= 2 * PI; d += 0.001)
    {
        int x1 = x + (int)(cos(d) * r);
        int y1 = y + (int)(sin(d) * r);
        SetPixel(x1, y1, color);
    }
}

Okay, so for each circle, you need to call cos and sin about 6283 times apiece. Let’s say you’re extremely interested in runtime performance and want to make this function faster. One thing you can do is pre-compute the sin/cos values at the beginning of your program and store them in memory, so that instead of calling a potentially expensive math function for each circle, you just read from memory, which is usually much cheaper.

Depending on how many values you need, you might even pre-compute them before the program starts up, writing the values to a file or putting them directly in source.

static bool initTrig = false;
static double costable[6284];
static double sintable[6284];

void DrawCircle(int r, int x, int y, Color color)
{
    if (!initTrig)
    {
        // One-time cost, paid on the first call only.
        for (int i = 0; i <= 6283; i++)
        {
            costable[i] = cos(i * 0.001);
            sintable[i] = sin(i * 0.001);
        }
        initTrig = true;
    }

    for (int i = 0; i <= 6283; i++)
    {
        int x1 = x + (int)(costable[i] * r);
        int y1 = y + (int)(sintable[i] * r);
        SetPixel(x1, y1, color);
    }
}


Published by

HexarA

Seattleite. Climber. Snowboarder. Traveler. Party rocker. Technologist. Spanish enthusiast. Fun-seeker.

2 thoughts on “Various Optimization Strategies”

  1. Awesome post! I have recently become a huge fan of lazy loading database objects on PHP classes. The application I am currently working on is a large social networking site that has a huge data model, much of which is never touched in a given request. Lazy loading allows us to avoid most of our expensive database calls unless we absolutely need them.

  2. Tradeoffs tradeoffs tradeoffs. If the lazy hardware engineers would just give us fast enough machines, with just enough RAM, we wouldn’t have to worry about any of this. 🙂
