After a recent talk at Cambridge NxtGenUG I was asked what effect gathering a trace in batch mode has on SQL Server performance, as opposed to using the GUI.
The obvious answer, as always with these things, is “it depends”. But I was intrigued to see if I could quantify that a little further.
Batch profiling is done by setting up a server-side Profiler trace. There is a post on how to do this here, although ‘Export – Script Trace Definition’ in the SQL Profiler GUI will generate the script for you.
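A server-side trace boils down to a handful of system stored procedures. As a rough sketch, assuming a writable `C:\Traces` folder on the server (the file name and the choice of events and columns here are illustrative, not what I actually captured):

```sql
-- Minimal server-side trace sketch; path and event/column choices are examples.
DECLARE @TraceID int, @maxfilesize bigint = 1024;

-- Create the trace, writing to a local file on the server (.trc is appended)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\OrderLoadTest', @maxfilesize, NULL;

-- Capture RPC:Completed (event 10) and SQL:BatchCompleted (event 12),
-- with TextData (column 1), SPID (12) and Duration (13) as example columns
DECLARE @on bit = 1;
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Start the trace (sp_trace_setstatus: 0 = stop, 1 = start, 2 = close)
EXEC sp_trace_setstatus @TraceID, 1;
```

When the run is finished, stop and close the trace with `sp_trace_setstatus @TraceID, 0` then `@TraceID, 2`, and the `.trc` file can be opened in Profiler or read with `fn_trace_gettable`.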
I have a little test rig here running SQL Server 2008 R2 and a load-testing machine. I ran a simple load test creating orders: it used 20 threads and produced 4,000 orders in just over 11 minutes.
I ran the whole test several times in three different scenarios:
- No tracing
- A GUI-based Profiler trace logging data to screen and file on a remote machine
- A Server-side Profiler trace logging data to a local file on the server
The two traces logged the same 190,000+ events and captured the same columns.
My findings aren’t very surprising but here they are:
| Scenario | Run time |
| --- | --- |
| No tracing | 11min 25sec |
| GUI Profiler | 12min 02sec |
| Server Profiler | 11min 44sec |
So the GUI profiling slowed the run down by 5.4% and the Server Profiler only by 2.8%.
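Those percentages follow directly from the run times in the table; a quick check, sketched here in Python with the times hard-coded from the results above:

```python
def slowdown_pct(baseline_secs: int, traced_secs: int) -> float:
    """Percentage slowdown of a traced run relative to the untraced baseline."""
    return round((traced_secs - baseline_secs) / baseline_secs * 100, 1)

baseline = 11 * 60 + 25   # no tracing: 11min 25sec
gui      = 12 * 60 + 2    # GUI Profiler: 12min 02sec
server   = 11 * 60 + 44   # server-side trace: 11min 44sec

print(slowdown_pct(baseline, gui))     # GUI trace overhead: 5.4
print(slowdown_pct(baseline, server))  # server-side trace overhead: 2.8
```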
The one thing that did surprise me was that the server-side trace file, while holding all the same information, was only 80% of the size of the file produced by the GUI.
The GUI has some advantages over Server Profiler:
- it is easier to use
- simpler to adjust the information being gathered
- traces can be written directly to a database table
- trace files are saved on the client PC, so no server file system access is required
And in many cases the load that SQL Profiler puts on the server will be low enough not to raise concerns.
However, we can see that server-side profiling, logging to a local file on the server, gives a real performance advantage over using the GUI.