A Mess of Nested Transactions

Transactions are an area that I often find used badly, or not used at all: transactions without any error handling, nested transactions, named transactions where the name serves no purpose, or just a plain lack of transactions where they should be used.

This is going to be the first of three posts looking at some aspects of transactions and transaction management. And deserving of the first post is that most misunderstood of things, the nested transaction.

Before getting into the details, I need to make one thing clear. Nested transactions are a lie. They do not exist in SQL Server.

A piece of code may appear to have multiple nested transactions in it. It may have multiple BEGIN TRANSACTION statements in it. It still only has one transaction though.

An easy way to show this would be to look at the transaction log. Since transactions are there to make multiple statements atomic (committed or rolled back as a unit), the transactions must be reflected in the transaction log to allow SQL to know where to roll things back to or what statements need to be undone/redone as a unit.

CREATE TABLE TestingTransactions (
SomeNumber INT,
SomeString VARCHAR(50)
);
GO

INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
VALUES  (0, 'Zero');

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	VALUES  (1, 'One');

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		VALUES  (2, 'Two');

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			VALUES  (3, 'Three');

		COMMIT TRANSACTION

	COMMIT TRANSACTION

COMMIT TRANSACTION

SELECT Operation, Context, [Transaction ID], AllocUnitName, [Transaction Name]
FROM fn_dblog(NULL, NULL);

GO

I’m inserting a row before the transaction because allocating the first page to a table generates a lot of log entries. Those log records are not relevant to the subject at hand and will just confuse things.

One table, three transactions, three inserts, three commits. Let’s see what went into the transaction log.

[Image: TransactionLogNestedTransactions]

What went into the log was a single begin transaction, a single commit and three inserts. The other two begin transaction statements and the other two commit transaction statements were completely ignored for the purposes of the transaction log.

The other two begin transaction statements did do something. Not very much, and nothing that reflected in the transaction log, but they did do one thing, and if I make the code a little more complicated it’ll become clear what that one thing is.

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	OUTPUT inserted.*
	VALUES  (1, 'One');
	SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran1;

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		OUTPUT inserted.*
		VALUES  (2, 'Two');
		SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran2;

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			OUTPUT inserted.*
			VALUES  (3, 'Three');
			SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran3;

		COMMIT TRANSACTION
		SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran1;

	COMMIT TRANSACTION
	SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran2;

COMMIT TRANSACTION
SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran3;

[Image: TranCount]

Each BEGIN TRANSACTION incremented the value returned by @@TranCount, each COMMIT TRANSACTION decremented the value returned by @@TranCount. If I looked at the transaction log after each statement, the BEGIN_XACT log entry corresponds to the BEGIN TRANSACTION which incremented @@Trancount from 0 to 1, and the COMMIT_XACT log entry corresponds to the COMMIT TRANSACTION which decremented @@Trancount from 1 to 0.

Only the operations which changed @@Trancount to or from 0 are reflected in the transaction log; they're the only ones which have any effect on what becomes part of the database's durable state.

Does ROLLBACK then follow the same rules? Does it get ignored unless it’s the last statement of multiple COMMIT/ROLLBACK statements?

CHECKPOINT; -- mark the log as reusable

BEGIN TRANSACTION

	INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
	OUTPUT inserted.*
	VALUES  (1, 'One');
	SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran1;

	BEGIN TRANSACTION

		INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
		OUTPUT inserted.*
		VALUES  (2, 'Two')
		SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran2;

		BEGIN TRANSACTION

			INSERT INTO dbo.TestingTransactions (SomeNumber, SomeString)
			OUTPUT inserted.*
			VALUES  (3, 'Three')
			SELECT @@TRANCOUNT AS TransactionCountAfterBeginTran3;

		ROLLBACK TRANSACTION
		SELECT @@TRANCOUNT AS TransactionCountAfterRollbackTran1;

	COMMIT TRANSACTION
	SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran2;

COMMIT TRANSACTION
SELECT @@TRANCOUNT AS TransactionCountAfterCommitTran3;

SELECT Operation, Context, [Transaction ID], AllocUnitName, [Transaction Name]
FROM fn_dblog(NULL, NULL);

 

[Image: RollbackTransaction]

[Image: TransactionLogRollbackTransaction]

No. The ROLLBACK TRANSACTION set the value of @@Trancount directly to 0 and undid all of the inserts back to the outermost BEGIN TRANSACTION. The COMMIT TRANSACTION statements which ran after it all threw errors:

Msg 3902, Level 16, State 1, Line 39
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

This is the danger of nesting transactions: one ROLLBACK, anywhere after the first BEGIN TRANSACTION, will undo everything right the way back to that first BEGIN TRANSACTION. These are not autonomous transactions being run one inside another; there's a single transaction, and a rollback anywhere will roll back everything.

This can be a debugging nightmare when the application creates a transaction and calls a procedure which begins a transaction which calls other procedures which begin and commit transactions. If something goes wrong in one procedure and that procedure rolls the transaction back, it can end up undoing a lot more work than the developer of that procedure may have expected.

In general I would recommend only starting and committing transactions at a single level, and making transaction handling part of the application’s design, not something that falls out of the implementation by accident. If there’s an intentional design around how transactions will be handled (and choosing not to have any is not a good idea), then it’s a lot less likely that there will be transaction-related bugs that need to be found and fixed.
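To make that concrete, here's a minimal sketch of the kind of pattern I mean: the transaction is owned and handled in one place, with explicit error handling. The procedure and table names are made up for the example, and THROW requires SQL Server 2012 or later.

CREATE PROCEDURE dbo.DoSomeWork -- hypothetical procedure, for illustration only
AS
BEGIN
	SET NOCOUNT ON;

	BEGIN TRY
		BEGIN TRANSACTION; -- the only place in the call chain where a transaction starts

		-- the actual work goes here
		UPDATE dbo.SomeTable
			SET SomeColumn = SomeColumn + 1
			WHERE SomeID = 1;

		COMMIT TRANSACTION;
	END TRY
	BEGIN CATCH
		IF @@TRANCOUNT > 0 -- only roll back if there's still an open transaction
			ROLLBACK TRANSACTION;

		THROW; -- rethrow so the caller knows the work failed
	END CATCH
END
GO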

And what happens when working with an existing app where this wasn’t done? I’ll look at that in the next post on transactions, on conditionally starting transactions and on the use of savepoints.

Of clustered indexes and ordering

There is a particularly irritating and persistent belief that indexes (usually it's the clustered index that gets picked on) are always physically ordered within the data file by the key columns. That is, that the data within the database file is always ordered by the key column.

It doesn’t help that official documentation states this ‘fact’.

In this case I'm going to diverge from my usual methodology of first proving (or disproving) a statement and then explaining it.

Do indexes (clustered or non-clustered) define the physical storage order of the rows?

No, absolutely not.

What indexes do is provide a logical ordering, a collection of pointers, that allows the storage engine to retrieve data from an index ordered by the index key. That's logical ordering only; it specifies nothing regarding the physical ordering.

The index structure is such that the page with key values 4, 5 and 6 will appear earlier in the index's logical ordering than the page with key values 10, 11 and 12. Where these pages are in the file is not defined at all. The page with key values 10, 11 and 12 could be page 240 in the database file while the page with key values 4, 5 and 6 could be page 655.

On the data pages themselves there’s no guarantee that the row with the key value 4 will appear earlier on the page than the row with the key value of 6. 6 could be the first row on the page and 4 last and that would be just fine.

Let's prove this. Time for DBCC PAGE and some undocumented commands.

First up, the order of rows on the page. I’m going to create a table in a nice new database (so that there are no other tables around messing things up) and populate it with some data.

CREATE TABLE OddandEven (
SomeNumber INT,
Filler CHAR(500) DEFAULT ' '
) ;
GO

CREATE UNIQUE CLUSTERED INDEX idx_SomeNumber ON OddandEven (SomeNumber);
GO

INSERT INTO OddandEven (SomeNumber)
SELECT TOP (50) (ROW_NUMBER() OVER (ORDER BY object_id))*2 - 1 FROM sys.objects;

INSERT INTO OddandEven (SomeNumber)
SELECT TOP (50) (ROW_NUMBER() OVER (ORDER BY object_id))*2 FROM sys.objects;

So what I'm doing there is simply inserting 50 odd numbers first and 50 even numbers second.

A quick check with DBCC IND shows me that page 89 of this database is a data page for this table. I'm going to use dump style 2 for DBCC PAGE, because I want a raw binary dump with no interpretation (I'm removing the portions that are just the filler, as that's just intentionally wasted space).
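For reference, the check looks something like this. The database name is a placeholder (I didn't name the test database above), and both DBCC IND and DBCC PAGE are undocumented, so treat this as a sketch:

DBCC TRACEON(3604); -- send DBCC PAGE output to the client instead of the error log
GO
-- list the pages belonging to the clustered index (index id 1) of the test table
DBCC IND ('TestingOrdering', 'OddandEven', 1); -- 'TestingOrdering' is a made-up database name
GO
-- dump page 89 of file 1 with style 2 (raw binary, no interpretation)
DBCC PAGE ('TestingOrdering', 1, 89, 2);
GO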

000000000EB6AC50:   20020000 1000fb01 37332020 20202020 † …..û.73
000000000EB6AE40:   20202020 20202020 20202020 20202002 †               .
000000000EB6AE50:   00001000 fb013735 20202020 20202020 †….û.75
000000000EB6AE60:   20202020 20202020 20202020 20202020 †
000000000EB6B040:   20202020 20202020 20202020 20020000 †             …
000000000EB6B050:   1000fb01 36342020 20202020 20202020 †..û.64 
000000000EB6B060:   20202020 20202020 20202020 20202020 †
000000000EB6B240:   20202020 20202020 20202002 00001000 †           …..
000000000EB6B250:   fb013636 20202020 20202020 20202020 †û.66
000000000EB6B260:   20202020 20202020 20202020 20202020 †

Hmm… 73, 75, 64, 66. That's not the correct physical ordering… What happened here is that I inserted the odd values first and they were written to the pages; then, when I inserted the even numbers, the pages had to split (leaving them probably around 50% full) and the even numbers were added into the empty space. SQL doesn't reorder the rows on the page (that would be expensive).

What keeps track of the logical ordering (which rows should be read first, second, etc. to get the results back in the logical order of the index) is the slot array at the end of the page.

OFFSET TABLE:
Row - Offset
 14 (0xe) - 7236 (0x1c44)
 13 (0xd) - 3666 (0xe52)
 12 (0xc) - 6726 (0x1a46)
 11 (0xb) - 3156 (0xc54)
 10 (0xa) - 6216 (0x1848)
 9 (0x9) - 2646 (0xa56)
 8 (0x8) - 5706 (0x164a)
 7 (0x7) - 2136 (0x858)
 6 (0x6) - 1626 (0x65a)
 5 (0x5) - 5196 (0x144c)
 4 (0x4) - 1116 (0x45c)
 3 (0x3) - 4686 (0x124e)
 2 (0x2) - 606 (0x25e)
 1 (0x1) - 4176 (0x1050)
 0 (0x0) - 96 (0x60)

That tells me that the row with the lowest key value is found at offset 0x60, the next lowest at offset 0x1050, then 0x25e, etc. The rows are not stored on this page in physical order; the slot array defines the logical order, so that anything needing the rows in the logical order of the index can read them off the page that way.

That answers the question about rows on a page. Let’s now look at whether pages are always stored in physical order within the data file.

I’m going to drop the OddandEven table and create a new table with the rows sized so that only a few rows fit onto a page.

CREATE TABLE PagePhysicalOrder (
  SomeNumber INT,
  Filler CHAR(800) DEFAULT ' '
);

CREATE UNIQUE CLUSTERED INDEX idx_TestingPhysicalOrder ON PagePhysicalOrder (SomeNumber)

DECLARE @i INT = 9;
WHILE @i >= 0
  BEGIN
    INSERT INTO dbo.PagePhysicalOrder (SomeNumber, Filler)
    SELECT TOP (10)
      ROW_NUMBER() OVER (ORDER BY (SELECT 1)) +@i*10,''
      FROM sys.objects;

    SET @i = @i - 1;
  END

That gets me 100 rows in the table, written in groups of 10, with the higher values for SomeNumber being inserted first. Now, to find where the rows are stored, I’m going to use the sys.fn_PhysLocFormatter function and the %%physloc%% virtual column. See http://www.sqlskills.com/blogs/paul/sql-server-2008-new-undocumented-physical-row-locator-function/ for more details on these.

SELECT SomeNumber,
sys.fn_PhysLocFormatter(%%physloc%%) AS RowLocation
FROM dbo.PagePhysicalOrder

[Image: RowPhysicalLocations]

The output of the PhysLocFormatter is FileID : Page Number : Slot Index. The output shows the rows with SomeNumber 75, 76, 77 and a few others are on page 197 while rows with a lower SomeNumber (65-70) are on page 248, further into the data file than the page containing the larger values of SomeNumber.

Hence we can say that the clustered index doesn’t enforce the physical order of the pages in the data file either.

The only thing that the clustered index (or nonclustered indexes) enforce is what values belong on a page together. If we have a table with an index on an integer column, we cannot have a situation where rows with a key value of 1, 2, 4, 8, 9 are on one page and rows with a key value of 3, 5, 6, 7 and 10 are on another. If only 5 rows fit onto a page, one page will have 1, 2, 3, 4 and 5 and another page will have 6, 7, 8, 9 and 10.  The physical order of the rows on those pages is irrelevant, as is the physical order of those two pages in the data file.

I suspect this myth came about because, when SQL creates or rebuilds an index, it will try as far as possible to put the pages of the index down in physical order of the index key. Doing so reduces logical fragmentation and allows read-ahead reads to work as efficiently as possible. This applies only when the index is created, rebuilt or reorganised, not during regular operations.

Pass Summit 2015

It’s two weeks until Pass Summit 2015!

It feels a little weird this year; it's the first year in quite some time where I'm not presenting in Seattle. The reason for that is pretty simple: I didn't submit any abstracts. It still feels strange to be going as an attendee, to be able to pick and choose sessions as I like rather than having to work around the slots that I have to speak in, and to be able to spend time just wandering around the conference rather than rushing from place to place as is usual.

Still, that gives me a good opportunity to attend sessions I’m interested in. Looking over the schedule, my 5 must-attend sessions are (in no particular order)

Plus the keynote with Dr DeWitt and Dr Nehme is an event not to be missed!

Of course, there aren’t just the sessions. There’s the Birds of a Feather lunch, the Women in IT lunch, the SQL Clinic and the CAT team to ask difficult questions of, and the hundreds and thousands of ad-hoc conversations in the corridors, many of which may be more interesting than the sessions themselves. Not to mention the multitude of parties and events in the evenings.

Oh, and there will be Crazy Hat day as well. For those who don't know what I'm talking about, it's Argenis Without Borders, version 2. Please donate if you can, I really want to see Argenis dancing to music from a trombone.

Index selectivity and index scans

There was a question raised some time back: ‘If an index is not selective, will the query operators that use it always be index scans?’

It's an interesting question and requires a look at what's going on behind the scenes in order to answer properly.

Short answer: No, not always.

Long answer…

Selectivity

Selectivity is a measure of what portion of the table satisfies a particular query predicate. The Microsoft whitepaper on statistics as used by the query optimiser defines selectivity as follows.

The fraction of rows from the input set of the predicate that satisfy the predicate. More sophisticated selectivity measures are also used to estimate the number of rows produced by joins, DISTINCT, and other operators.

Bart Duncan wrote a nice detailed blog post a while back explaining the difference between density, selectivity and cardinality. In summary, indexes have density, a measure of how unique the left-based column subsets within them are; predicates have selectivity, a measure of what portion of the table they affect; operators have cardinality, a measure of how many rows the operator processes.

Indexes cannot be said to be selective or not; they can only be said to have a high or low density. It is possible for a predicate on a very low density column (unique) to have a very poor selectivity (a large percentage of the table affected). Imagine ID > 0 where ID is an int identity column. The column is unique, but the predicate affects the entire table. Low density (which is good), but poor selectivity.

So let’s alter the original question. “If an index has a high density (not very unique, lots of duplicate values), will query operators against it always be index scans rather than index seeks?”

Seek and Scan

Before we go on, I want to quickly look at the main difference between a seek operation and a scan operation.

A seek is an operation which navigates down the index's b-tree looking for a row or for the start/end of a range of rows. A seek requires a predicate, and that predicate must be of a form that can be used as a search argument (SARGable).
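As a quick illustration of what SARGable means, here's a minimal sketch against a made-up Orders table: the first predicate leaves the column untouched and can be used to seek on an index keyed on OrderDate, the second wraps the column in a function and cannot.

-- SARGable: the column is compared directly, so an index on OrderDate can be used for a seek
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '2015-01-01' AND OrderDate < '2016-01-01';

-- Not SARGable: the function on the column prevents a seek, forcing a scan of the index
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2015;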

A scan is a read of part or all of the leaf level of an index.

High-density indexes

So what exactly is the problem with a high density index? In short, it returns a lot of rows for any predicate filter against it (unless there's a TOP involved, but let's ignore those cases here). If the index has a high density (and let's assume for simplicity that there's no data skew), any predicate using that index automatically has a poor selectivity; it returns a large portion of the table.

If we take as an example a 100 000 row table with a column called Status that has only 4 values, then, assuming that the distribution of those values is equal, a query with a predicate searching for one of those values will read 25 000 rows. If we have a nonclustered index on that integer column, it works out that the nonclustered index has 223 pages at the leaf level and is 2 levels deep in total. Given that the four values have equal distribution, an index seek to retrieve the rows for one of those status values will require approximately 57 pages to be read.

Is the index scan better?

The scan will read all the leaf pages, that's what a scan does (ignoring cases like MIN, MAX and TOP where it can scan and read only part of the index). So if SQL decided to use an index scan because of the high density of the index, it would have to read all 100 000 rows on all 223 pages (plus the index root page).

57 pages for the index seek vs 224 pages for the index scan. Looks pretty obvious which is better. To prove that I’m not making things up, let me test this and get actual numbers.

First the setup:

CREATE TABLE TestingIndexSeeks (
   Status INT,
   Filler CHAR(795) DEFAULT ''
);

INSERT INTO TestingIndexSeeks (Status)
SELECT NTILE(4) OVER (ORDER BY (SELECT 1)) AS Status FROM (
    SELECT TOP (100000) 1 AS Number FROM sys.columns a CROSS JOIN sys.columns b
) sub

CREATE NONCLUSTERED INDEX idx_Testing_Status ON dbo.TestingIndexSeeks (Status)

GO
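As an aside, if you want to see the density figure the optimiser has for that index, DBCC SHOW_STATISTICS will show it. With four evenly distributed values, the density of the Status column should work out to 0.25 (1 divided by 4 distinct values). Something like:

-- density vector for the nonclustered index's statistics
DBCC SHOW_STATISTICS ('dbo.TestingIndexSeeks', 'idx_Testing_Status') WITH DENSITY_VECTOR;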

Then the test:

SELECT status FROM dbo.TestingIndexSeeks WITH (FORCESEEK) WHERE Status = 3

SELECT status FROM dbo.TestingIndexSeeks WITH (FORCESCAN) WHERE Status = 3

Statistics IO for the two queries:

Seek
Table ‘TestingIndexSeeks’. Scan count 1, logical reads 59, physical reads 0.

Scan
Table ‘TestingIndexSeeks’. Scan count 1, logical reads 225, physical reads 0.

Yup, the index seek is better, and it's the one that the optimiser chooses if it is allowed to choose.

[Image: IndexSeek]

High density indexes and the clustered index

So why the confusion around index scans on high density indexes? I suspect it’s because of the way the optimiser handles noncovering indexes where the predicates are not selective. This has nothing to do with the efficiency of the seek or scan operators on the nonclustered index though, it’s got to do with the mechanism used for the key lookup.

If a nonclustered index that SQL could use for a query is not covering, then for each row in that resultset it has to do a lookup back to the clustered index/heap for the rest of the columns. Those key (or RID) lookups are expensive operations. If too many of them are needed, the optimiser switches to a scan, not of the nonclustered index (that would be pointless, it's still not covering), but of the clustered index, because that at least has all the columns needed for the query. (It could also switch to a scan of a different nonclustered index, if there is one that's covering but with columns in the wrong order to be seekable.)

Summary

In summary, does having a high density nonclustered index result in index scans of that index? No (unless the predicate is not SARGable). However, it can result in scans of a different index (probably the clustered index) if the high density index is not covering for the query, which leaves that high density index unused.

Repairing a replication subscriber

It's a question which has come up a couple of times. If a subscriber of a transactional replication publication becomes corrupt, is running CheckDB with REPAIR_ALLOW_DATA_LOSS safe?

The theory is, since the subscriber is a copy of another database, allowing CheckDB to discard data in the process of repairing won’t actually cause data loss (the data is still there in the publisher).

As with many things, it's partially true; however, there's a little more to it than just that.

To see why, let’s set up some replication and intentionally corrupt some data in the subscriber and run a repair.

First, the source DB. I’m going to keep it really simple so that we can see the effects. A database with a single table called Customers, populated with 100 000 rows via SQLDataGenerator. The filler column simulates other columns that would be in a real table.

CREATE TABLE Customers (
CustomerID INT IDENTITY PRIMARY KEY,
FirstName VARCHAR(50),
Surname VARCHAR(50),
AccountNumber CHAR(15) UNIQUE,
AccountStatus VARCHAR(25),
Filler CHAR(200) DEFAULT ''
)

Once populated, we set up a transactional replication publication with default settings and, for simplicity, have the subscriber be a second database on the same instance. When we query the table on the subscriber, we have exactly the same number of rows as the publisher has.

[Image: ReplicationSubscriber]

To simulate a misbehaving IO subsystem, I'm going to take the subscription DB offline, open the mdf in a hex editor, scribble on top of a few pages in the user table, then bring the DB back online. A CheckDB returns a couple of pages of errors ending with:

CHECKDB found 0 allocation errors and 16 consistency errors in table ‘Customers’ (object ID 421576540).

CHECKDB found 0 allocation errors and 18 consistency errors in database ‘ReplicationSubscriber’.

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (ReplicationSubscriber).

Minimum level to repair is REPAIR_ALLOW_DATA_LOSS and, based on the error messages, doing so will deallocate five pages. Let’s run the repair and then see what’s left in the table.
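For completeness, the repair itself looks something like this; CheckDB needs the database in single-user mode before it will run with a repair option.

ALTER DATABASE ReplicationSubscriber SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

DBCC CHECKDB ('ReplicationSubscriber', REPAIR_ALLOW_DATA_LOSS);
GO

ALTER DATABASE ReplicationSubscriber SET MULTI_USER;
GO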

[Image: ReplicationSubscriber After Repair]

We’ve lost 155 rows from the Customers table. The replication publisher still has them, but the subscriber now does not. So what happens if someone makes some data modifications on the publisher and changes one of the rows that was lost at the subscriber?

UPDATE dbo.Customers
SET AccountStatus = 'Closed'
WHERE CustomerID = 48700;
GO

INSERT INTO dbo.Customers
(FirstName,
Surname,
AccountNumber,
AccountStatus
)
VALUES  ('Mary',
'White',
'4985563214AN',
'Pending'
);

Give it a few seconds to replicate and then let’s check the table in the publisher. The new customer isn’t there… If we go and have a look at the replication, it’s throwing errors.

[Image: BrokenReplication]

“The row was not found at the Subscriber when applying the replicated command”, which is quite true; the row's not there any longer.

What’s happened is that one of the rows updated on the publisher was lost when CheckDB repaired the database. The rows lost due to the repair weren’t automatically fetched from the publisher, there’s no mechanism in transactional replication for that to happen. Hence when the row was updated on the publisher, the update statement was replicated and the row on the subscriber couldn’t be found, causing the replication to fail, retry and fail again. It will keep failing until it is either reinitialised or the missing row is manually added to the subscriber.

What we needed to have done, to prevent this outcome, was to sync the subscriber with the publisher after CheckDB deallocated the pages, manually or with something like SQLDataCompare, and explicitly add back the rows which were on the deallocated pages. The primary key values must be kept the same, as that's what replication uses to identify which rows in the subscriber need to be updated.
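Since in this demo the publisher and subscriber are databases on the same instance, a manual sync could look roughly like the sketch below. The publisher database name is made up here (I never named it above); with separate servers the same idea applies via a linked server or a tool such as SQLDataCompare.

-- re-insert the rows the repair discarded, keeping the original CustomerID values
SET IDENTITY_INSERT ReplicationSubscriber.dbo.Customers ON;

INSERT INTO ReplicationSubscriber.dbo.Customers
	(CustomerID, FirstName, Surname, AccountNumber, AccountStatus, Filler)
SELECT p.CustomerID, p.FirstName, p.Surname, p.AccountNumber, p.AccountStatus, p.Filler
FROM ReplicationPublisher.dbo.Customers AS p -- 'ReplicationPublisher' is a placeholder name
WHERE NOT EXISTS (SELECT 1
                  FROM ReplicationSubscriber.dbo.Customers AS s
                  WHERE s.CustomerID = p.CustomerID);

SET IDENTITY_INSERT ReplicationSubscriber.dbo.Customers OFF;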

So to go back to the original question, yes it’s probably fine to run CheckDB with repair on a replication subscription, providing that the data deallocated as part of the repair is manually synced from the publisher afterwards. If it isn’t, the replication will break as soon as one of the rows affected by the repair is updated or deleted at the publisher.

Unusual errors with Distributed Replay

I’ve come to really like Distributed Replay in the last couple of years. I’ve used it to do a scale test, I’ve used it to test a workload for performance regressions before upgrading. It has a lot of possibilities.

One problem with it is there’s no GUI, so configuring it requires a fair bit of time spent playing around on the command line or in the depths of Component Services.

Another problem is that there aren’t a lot of people using it, so there isn’t a lot of detail on what to do if something goes wrong.

Make that ‘when something goes wrong’; DReplay is a little finicky.

The more common errors are firewall and COM related and they appear in several blog posts; a search for the error code usually turns up a resolution. However, there was an error which I ran into recently which turned up exactly 0 results in Google. So, to fix that problem, here's a description of the error, the circumstances and what turned out to be the cause of the error.

To start, the scenario. The preprocess of the trace files had been done, the firewall configured, the COM setting changes made. The services were running, no errors showing in the logs. I’d used DReplay on the machine previously with the same processed trace file and it had worked fine. This time, however…

[Image: DReplay]

“Error DReplay   Failed to get client information from controller.”

After turning up nothing in Google, I spent half the afternoon checking logs, restarting the services, restarting the computer, checking and rechecking the firewall and the COM settings. I finally went and checked the details of the controller and client services.

[Image: DReplayController]

Anyone spotted the problem yet? For those who haven’t, let me highlight pieces of those last two screenshots.

[Image: DReplay_Small]

[Image: DReplayController_Small]

Distributed Replay is not instanced. If there are two versions of SQL Server installed on the machine (this laptop has SQL 2012 and SQL 2014) and the replay controller and client were installed with both, then the service points to the executables from the most recent installation. The older version's executables are still there however, and they still execute. They throw errors, but the errors do not, in any way, indicate that there's a version problem.

The above error is what the Replay option of DReplay returns. If the preprocess is run from the incorrect directory, the error returned is “Error DReplay   Object reference not set to an instance of an object.”

[Image: DReplayPreprocess]

The fix is as simple as changing to the correct directory and running the correct version of DReplay, the one that matches the version which the services point to.

Q&A from the DBA Fundamentals Virtual Chapter

A couple of weeks ago I did a presentation to the DBA Fundamentals virtual chapter. The presentation title was “What execution plans can tell you about query performance”.

The slides and recording are available at the Virtual Chapter's home page.

I didn’t manage to get all of the questions answered, so here are a couple of slightly more involved questions which didn’t get answered.

Does the order of tables matter when doing an inner join?

Short answer: No.

Long answer: Maybe, but it shouldn’t.

The optimiser decides which table is joined in which order. Putting a table first in the join clause does not mean it will be the first one processed. In general (as in, in ~99% of cases), put the tables in the join clause in the order which makes logical sense for the query.

Changing table order can, in some cases, change the plan. This doesn’t mean that SQL uses the order which the tables are specified in to determine the plan, it just means that changing the query resulted in the optimiser searching through the plan search space in a different way and finding a different ‘good enough’ plan. It’s not going to be deterministic and hence shouldn’t be relied on.

Will moving a filter from the WHERE to the INNER JOIN improve performance?

No, but again it can change the plan generated as described above. Personally I prefer joins in the JOIN clause and filters in the WHERE clause, because that’s what’s normal and expected.

Please note that moving filters between the WHERE clause and the ON clause of an OUTER JOIN changes the logic of the query and likely the results.
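To illustrate, here's a minimal sketch against made-up Customers and Orders tables. With the filter in the ON clause the outer join still returns every customer, with NULLs in the order columns where nothing matched; with the filter in the WHERE clause those customers are removed, and the query behaves like an inner join.

-- filter in the ON clause: all customers returned, order columns NULL where no matching order
SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o
	ON o.CustomerID = c.CustomerID
	AND o.OrderDate >= '2015-01-01';

-- filter in the WHERE clause: customers with no matching order are filtered out entirely
SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o
	ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '2015-01-01';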

If multiple users are running the same query with different parameter values, will it result in different plans or recompiles?

Neither.

There will be one plan in cache (unless the SET options differ, but let’s ignore that for now). No matter what the parameter values are, when the same query is run, the plan will be fetched from cache and used.
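If you want to see that for yourself, a rough sketch like the one below shows the cached plan and its usecounts climbing as the query is rerun with different parameter values; the filter text is a placeholder for whatever query you're checking.

SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourQueryText%' -- placeholder: filter to the query you're interested in
	AND st.text NOT LIKE '%dm_exec_cached_plans%'; -- exclude this monitoring query itself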

Does index fragmentation have an effect on the join type chosen?

The Query Optimiser has no idea what logical fragmentation is. It doesn't base its choices on how the pages are laid out in the data file. Logical fragmentation affects large range scans from disk; that's all. If the pages are in memory, then fragmentation has no further effect.

How not to do a Masters degree

There are many, many guides to successfully completing a post grad degree, so I am not going to add to them. Instead, this is a list of things, based on personal experience, that you can do to make it somewhere between very difficult and impossible to complete that Masters or Doctorate degree.

To be clear, I’m talking about the degrees which are mostly, if not entirely, based on research and a dissertation, not coursework.

Do the degree while holding down a full-time job

Initially this looks like it’ll work out fine. Work Monday to Friday, work on the degree on Saturday and Sunday. For the first few months it does work fine.

But there’s one thing that a dissertation requires and that’s a large amount of dedicated time. Time to read the literature. Time to come up with the hypothesis or research questions. Time to design an experiment. Time to conduct that experiment. Time to revise the experiment, conduct it again, revise again, conduct again… Time to analyse the results. Time to write up the results. Time to edit, rewrite, edit, rewrite, edit…

As the months pass, and as the initial enthusiasm and fun fade, it becomes harder and harder to spend most of the weekend on the research, weekend after weekend after weekend, for a couple of years.

Sure, it’s possible to complete a Masters degree while working full time, but it’s like playing a game on Insane difficulty level.

Decide that you want to get heavily involved in the SQL community

The SQL community are a great bunch of people and there's a lot of encouragement to jump in and get involved, for many good reasons.

Posting on forums is great in many ways: it boosts your confidence, it gets you recognition and it's a good way to get to know features you'll never use in your regular job.

Blogging and writing are a great way to share knowledge, and there's a thrill from watching the page hit count go up, from the first comment, from the compliments, especially when your blog post gets referenced by others as the authoritative article on a subject.

But it takes time. Lots of time. Articles can take days of work, blog posts can take anything from a few minutes to many hours depending on the subject and the depth of the post. Presenting takes lots and lots of prep time. An hour-long presentation can easily require a day of prep, and that’s once you’ve done several presentations. The first one can take many days of writing slides, rewriting slides, writing demos and rehearsing the presentation several times. Forums will take every minute you’re willing to give to them and more, and there’s the constant temptation of ‘just one more post…’

And where’s that time going to come from? The time that would otherwise have been spent on the research and dissertation.

Get burned out, and don’t seek help

I’m probably going to get flak for this, but it has to be said.

As an industry in general, we are too reluctant to ask for help. I don’t mean technical help (though that too in many cases).

We are too eager to put on a pedestal the person who works 12 hours a day, 6 days a week, the person who pulls all-nighters on a regular basis, the person who never takes vacation because he ‘doesn’t have time’. We’re conditioned to see extreme hours worked as a sign of dedication, of what is needed to get ahead.

It’s not heroic. It’s not required. It’s not something to be admired.

It’s stupid.

Long days are sometimes required, weekend work is sometimes necessary, but they both should be the exception, not the norm. Excessive overtime, if needed to meet deadlines, should be followed with a discussion on what went wrong such that the overtime was required. Was the estimation inadequate? Was the project analysis flawed? Did the scope creep (or gallop)? Were people working on multiple projects at the same time while the project plan expected them to be dedicated? Were assumptions not met or essential infrastructure delayed?

If overtime is frequently required, then management has failed at their job. A developer working 12 hours a day, 6 days a week for months on end is not a sign of dedication or heroics, it’s a sign that someone, somewhere in management is not doing their job properly.

This kind of workload and these hours are not sustainable. They lead to mistakes and buggy code, they lead to burnout and they lead to people, top people, walking away and never coming back.

I wrote about burnout a few years ago. Looking back now, after another recent brush with it, I don’t think I went far enough in what I said there.

It is not a flaw, it is not a weakness, it is not admitting failure, to seek professional help for burnout. It’s far better to do so than to suffer for years with the effects.

And to get this back on topic, trying to work through burnout is like trying to swim through syrup. Lots of effort, little progress, very easy to give up and stop trying. That’s not going to do wonders for that research and the couple-hundred page dissertation that needs to be done.


Despite all of that, in case anyone is wondering, graduation is mid-September.

2011 Book review

Another year over and much as I did last year, I’m going to briefly go over the books I read this last year.

I will freely admit, very few of these could be considered ‘classic literature’, most is a mix of sci-fi, fantasy or adventure fiction. That’s just what I like to read.

Book total this year was 53, up from the 45 I managed in 2010 and above the 50 that I aimed for. Part of this is that I travelled more (and hence had time with nothing to do but read), part is because I took a couple of small vacations (and spent time reading) and part is due to getting an iPad and loading a couple of book apps on there.

The iPad is never going to replace real, physical, paper books for me. I love the smell of new books, the feel of the book (and you can’t read an iPad in the bath without significant risk). That said, it is convenient when travelling and for carrying a few books easily. It’s especially nice when getting books from Amazon. 6 week shipping vs immediate delivery. No contest there.

My choice for best books of the year:

  1. First Lord’s Fury (Codex Alera) by Jim Butcher. This is the climax of the Alera series and definitely the best of the bunch. Fast moving, tense, full of action and altogether an excellent ending for an excellent series. One thing I really like about this one: It doesn’t end with ‘happily ever after’.
  2. Elantris by Brandon Sanderson. This is a bit of a surprise. I got this on sale without too much in the way of expectations. Not to say I don’t like Brandon Sanderson, I’ve enjoyed everything of his that I’ve read, but this was his first published book and so I was willing to give it a little leeway. Not necessary. Good characters (though I’m sure I recognise that headstrong princess from a few places), good plot without too many holes and an intriguing mystery that all comes together logically in a way that leaves you saying ‘But, of course that’s the problem’. Definitely recommend and looking forward to more of his work.
  3. Star Trek: Destiny (trilogy). Yes, I’m recommending Star Trek novels. The world has not ended. I find most Star Trek novels are quickly churned out, mediocre novels. Average writing, average plotting (at best) and usually a reset button to return the universe to the way it was at the end. This trilogy is none of those. The plot works, it’s intertwined over three books and about four time-periods and the crew of at least four ships, and it works. It also leaves the universe dramatically changed (in a way that I did not foresee coming). Finally it’s one of the few time travel tales I’ve read that doesn’t leave me cringing.

Sooo… books per month.

[Image: BookList]

You can almost see from that which months I was travelling or on holiday. June – trip to UK and a few days at leisure. Oct – trip to Pass and lots of time to read while travelling. Nov – Week away in the middle of nowhere.

Lastly, books per genre. Yes, I read a lot of fantasy. (note, these links go to the library pages on this blog, there’s a link to the Amazon page from there)

Science Fiction

  1. The Long Night of Centauri Prime (Babylon 5: Legions of Fire, Book 1) by Peter David
  2. The Light of Other Days by Arthur C. Clarke
  3. Star Trek: Destiny #3: Lost Souls by David Mack
  4. Star Trek: Destiny #2: Mere Mortals by David Mack
  5. A Confederation of Valor (omnibus) by Tanya Huff
  6. Star Trek: Destiny #1: Gods of Night by David Mack
  7. Earthfall (Homecoming) by Orson Scott Card
  8. Deathstalker by Simon R. Green
  9. Childhood’s End by Arthur C. Clarke
  10. Songs of Distant Earth by Arthur C. Clarke
  11. The Time Machine (SF Masterworks) by H. G. Wells
  12. The Call of Earth by Orson Scott Card

Fantasy

  1. The Phoenix Transformed (The Enduring Flame) by Mercedes Lackey, James Mallory
  2. Nightingale’s Lament (Nightside, Book 3) by Simon R. Green
  3. Elantris by Brandon Sanderson
  4. The Gunslinger (The Dark Tower) by Stephen King
  5. Agents of Light and Darkness (Nightside, Book 2) by Simon R. Green
  6. The Dragon Token (Dragon Star, Book 2) by Melanie Rawn
  7. Something from the Nightside (Nightside, Book 1) by Simon R. Green
  8. A Calculus of Angels (The Age of Unreason, Book 2) by J. Gregory Keyes
  9. Stronghold (Dragon Star, Book 1) by Melanie Rawn
  10. The Crystal City (The Tales of Alvin Maker, Book 6) by Orson Scott Card
  11. Guards of Haven: The Adventures of Hawk and Fisher by Simon R. Green
  12. The Last Olympian (Percy Jackson and the Olympians, Book 5) by Rick Riordan
  13. The Battle of the Labyrinth (Percy Jackson and the Olympians, Book 4) by Rick Riordan
  14. The Titan’s Curse (Percy Jackson and the Olympians, Book 3) by Rick Riordan
  15. The Sea Of Monsters (Percy Jackson and the Olympians, Book 2) by Rick Riordan
  16. The Lightning Thief (Percy Jackson and the Olympians, Book 1) by Rick Riordan
  17. Heartfire (The Tales of Alvin Maker, Book 5) by Orson Scott Card
  18. Alvin Journeyman (Tales of Alvin Maker, Book 4) by Orson Scott Card
  19. Prentice Alvin (The Tales of Alvin Maker, Book 3) by Orson Scott Card
  20. Red Prophet (Tales of Alvin Maker, Book 2) by Orson Scott Card
  21. Seventh Son (Tales of Alvin Maker, Book 1) by Orson Scott Card
  22. Rides a Dread Legion: Book One of the Demonwar Saga by Raymond E. Feist
  23. First Lord’s Fury (Codex Alera) by Jim Butcher
  24. Issola (Vlad Taltos) by Steven Brust
  25. Turn Coat (The Dresden Files, Book 11) by Jim Butcher
  26. Hawk by Simon R. Green
  27. Taliesin by Stephen R. Lawhead

Other Fiction

  1. Robert Ludlum’s The Lazarus Vendetta: A Covert-One Novel by Robert Ludlum, Patrick Larkin
  2. Robert Ludlum’s The Altman Code: A Covert-One Novel by Robert Ludlum, Gayle Lynds
  3. The Bourne Identity: A Novel by Robert Ludlum
  4. The Eyre Affair: A Thursday Next Novel (Thursday Next Novels (Penguin Books)) by Jasper Fforde
  5. Lords of the Bow by Conn Iggulden
  6. Robert Ludlum’s The Cassandra Compact: A Covert-One Novel by Robert Ludlum, Philip Shelby
  7. Robert Ludlum’s The Hades Factor: A Covert-One Novel by Robert Ludlum

Non-fiction

  1. Three Roads to Quantum Gravity by Lee Smolin
  2. Expert SQL Server 2005 Development by Adam Machanic, Hugo Kornelis, Lara Rubbelke
  3. Notes From a Small Island by Bill Bryson
  4. 19 Deadly Sins of Software Security: Programming Flaws and How to Fix Them (Security One-off) by Michael Howard, David LeBlanc, John Viega
  5. The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd Edition) by Frederick P. Brooks
  6. On the Shores of the Unknown: A Short History of the Universe by Joseph Silk
  7. Warrior Soul: The Memoir of a Navy Seal by Chuck Pfarrer

SQL University: Advanced Indexing – Indexing Strategies

Right, I know it’s Friday and everyone’s tired and looking forward to the weekend, but I do need to finish off this indexing section and I’ll try to keep this short and interesting and hopefully keep everyone awake.

There's no shortage of information available on how to create indexes. Hell, I've written a copious amount myself. Most of those many articles, however, are written from the point of view of indexing single queries: what you choose for the where clause, what has to go into the include columns to create the perfect index for this query. Now that's all well and good, but I've never met a system that had only one query per table (maybe there is such a system out there, but I've never found it).

So what I’m going to try to do today is address the topic of a strategy for indexing. How to approach indexing, not for a single query, but for the system as a whole. I won’t be able to cover this in-depth, this is material worthy of an entire book chapter, if not an entire book, but I can at least touch on the essential portions.

Now, there are two main positions that we could be in when considering indexing strategies for an entire system:
1) A brand new system that’s still in development
2) An existing system that’s being used actively.

One at a time…

Indexing strategies for a brand new system

Start by choosing a good clustered index. What makes a good clustered index? Well, it depends :-)

The clustered index is the base, it will affect each and every nonclustered index, and it's not trivial to change once the system is in use, so choose carefully. I'm not saying another word on the subject of a clustered index, not today.

Once that’s done…

Continue reading ‘SQL University: Advanced Indexing – Indexing Strategies’