Naming conventions are poison. | Facility9

via Naming conventions are poison. | Facility9.

In this article Jeremiah Peschka speaks from my heart. One day I hope naming conventions will go the way of … well … one of those things that became obsolete because technology moved on. (A shining example: the A20 gate in x86 processors. It was a hack when first introduced, but it was a hack that stuck.) The rest of the computing environment changed so much that the A20 gate became more trouble than it was worth. That’s how I feel about naming conventions: they cause more trouble than they are worth, because the tools do a much better job of telling us what something is than messing with the name ever could.

Through my current work I can now add a new hated naming convention to my growing list. This one’s not a full-grown hate, more an annoyance, but I don’t like naming an instance after the version of SQL Server running on it. It becomes tedious for everyone come the day the old version reaches end of life.

Write Queries Organically | Facility9

The actual title of the piece is:

How I Get By Without sysadmin

But I think the essence is in a subtitle: Write Queries Organically

via How I Get By Without sysadmin | Facility9.

If I had my way, anyone who touches a SQL Server (that I have some responsibility for) would have to sign a declaration that they have read this article and swear an oath to abide by it.

Similarly, anyone coming to complain about the SQL Server being slow would have to provide a written explanation of why they can’t improve the execution plans of the three most expensive queries. Anyone caught with a 1.5 GB heap table will be made to work through the top ten most expensive queries before we even think of checking any performance counters.
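For the record, and purely as a sketch of what I mean by „most expensive queries“: something along the following lines, using the plan-cache DMVs, usually surfaces the top offenders. (Note that sys.dm_exec_query_stats only covers what is still in the plan cache, so it reflects activity since the plans were cached, not since the beginning of time.)

    SELECT TOP (3)
        qs.total_worker_time / 1000 AS total_cpu_ms,  -- CPU since the plan was cached
        qs.execution_count,
        qs.total_logical_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

Order by total_logical_reads instead if reads, rather than CPU, are the pain point.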

Of course, the SQL Servers we’re talking about are never going to be perfect (not for the price the business partners are willing and able to pay), so there will often be something the DBA can do to improve performance, or something they can suggest to the developer and/or user. However, going straight to the DBA without checking the basics is like calling the electrician when your TV doesn’t work because the batteries in the remote are dead. Unnecessary expense, and possibly even ridicule, may ensue.

Yours, 100% friendlywise, A

Adam Machanic : How Queries Are Processed (A Month of Activity Monitoring, Part 13 of 30)

My metaphor is a diner. The Resource Diner.

via Adam Machanic : How Queries Are Processed (A Month of Activity Monitoring, Part 13 of 30).

I love this visualisation of how delays can happen during the processing of a query.

It reminds me of another metaphor. In the old days, to get more performance out of a car you would simply add more cylinders to the engine (add more CPUs), increase the size of each cylinder (increase clock speed), or pump in more fuel (add memory). Nowadays you have fuel injection systems adding the fuel under higher pressure and in more than one squirt, multiple turbo-/superchargers, modulated ignition sparks, and a whole host of other things. So you really do have to be a rocket scientist to know which dial to turn in which direction, and how far, to improve performance in each and every scenario.

Using Memory for Performance Tuning in SQL Server by Brent Ozar Unlimited®

Everybody’s worried that adding memory is somehow cheating.

Says this article:

via Using Memory for Performance Tuning in SQL Server by Brent Ozar Unlimited®.

I can’t say I’ve ever heard that worry expressed in that way. I hear one group worried about the price of adding memory. I hear another group worrying that adding memory will just delay the onset of the problem, and that as the data volume increases there will come a time when you can’t chuck memory at it anymore and the Godzilla-like table/query will eat your performance.
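For what it’s worth, before chucking memory at it you can at least see who is eating the buffer pool today. A minimal sketch (run in the user database; it counts 8 KB pages per object via sys.dm_os_buffer_descriptors, which can take a while on a box with a lot of memory):

    SELECT
        OBJECT_NAME(p.object_id) AS object_name,
        COUNT(*) * 8 / 1024      AS buffer_mb        -- 8 KB pages -> MB
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au
      ON au.allocation_unit_id = bd.allocation_unit_id
    JOIN sys.partitions AS p
      ON au.type IN (1, 3)                           -- in-row and row-overflow data
     AND au.container_id = p.hobt_id
    WHERE bd.database_id = DB_ID()
    GROUP BY p.object_id
    ORDER BY buffer_mb DESC;

If one Godzilla table owns most of that list, adding memory only feeds it.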

We had a classic example over the last few days. A customer screaming for blood because his tool is unusable, and everyone passing the buck down the line until it stops with the poor sucker who has no one to pass it on to. The louder the screams, the further it spreads, and the more „the buck stops here“ people get hit with the stuff flying off the fan.

So we ran traces and Perfmon, whilst a cursory look at the database revealed a few heap tables with a gigabyte of data each that were getting hit pretty often. It doesn’t take much thinking to realise that you can throw memory at the problem to get the heaps into memory, or you can get indexes added and then, if still necessary, add memory. Adding an index is not exactly rocket science, but the developer had previously tried to bully the assembled crowd with loud assertions that, with all their years of experience, they had optimised their indexes until there was nothing more to be won. At this point a gentle suggestion that a few indexes must then have gone missing, since a 1.5 GB indexless heap table is one of the most obvious signs of poor database design, allowed the poor fellow to save some face and later admit that some indexes must have been lost in the porting process from MySQL.
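Spotting such tables doesn’t need a developer’s confession, by the way. A sketch that lists every genuinely indexless heap with its size: in sys.indexes a heap appears as index_id = 0, and the NOT EXISTS rules out tables that at least have nonclustered indexes.

    SELECT
        OBJECT_SCHEMA_NAME(i.object_id)    AS schema_name,
        OBJECT_NAME(i.object_id)           AS table_name,
        SUM(ps.used_page_count) * 8 / 1024 AS used_mb
    FROM sys.indexes AS i
    JOIN sys.dm_db_partition_stats AS ps
      ON ps.object_id = i.object_id
     AND ps.index_id  = i.index_id
    WHERE i.index_id = 0                              -- heap: no clustered index
      AND NOT EXISTS (SELECT 1 FROM sys.indexes AS nc -- ...and no nonclustered ones either
                      WHERE nc.object_id = i.object_id
                        AND nc.index_id > 0)
    GROUP BY i.object_id
    ORDER BY used_mb DESC;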

The fact that the porting from MySQL to MSSQL was done using a Microsoft migration tool, and that there were ample but unheeded warnings to check that all design structures had been ported, can of course be put down to a blustering „well, we can’t check everything, can we?“. Which is true as far as it goes: I cannot expect them to check that the moon is not made of green cheese, or to find the first occurrence of my telephone number within the decimal expansion of pi. But from my position, if someone has a problem with „my“ product, I do think it is my duty to check that I have my house in order before I try to shovel the blame onto someone else…

One Common SQL Programming Mistake (Red-Gate)

Predicate evaluation order

If you are familiar with the logical query processing order, then you may expect that a query is executed in the following order:

FROM
WHERE
GROUP BY
HAVING
SELECT

The sequence above outlines the logical order for executing a query. Logically the FROM clause is processed first, defining the source data set; next the WHERE predicates are applied, followed by GROUP BY, and so on.

However, physically, the query is processed differently and the query optimizer is free to move expressions in the query plan in order to produce the most cost-efficient plan for retrieving the data. This leads to a common misunderstanding that a filter in the WHERE clause is applied before the next phases are processed. In fact, a predicate can be applied much later in the physical execution plan. Also, there is no left-to-right order for the execution of predicates. For example, if you have a WHERE clause containing „WHERE x=1 AND y=2“, there is no guarantee that „x=1“ will be evaluated first. They can be executed in any order.

via Ten Common SQL Programming Mistakes.
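The canonical demonstration of that last point, with made-up table and column names: given a varchar column holding mixed content, the first query below can die with a conversion error even though ISNUMERIC „comes first“ in the WHERE clause.

    -- Hypothetical table: dbo.Settings(value varchar(50)), mixed numeric/text content.
    -- Nothing guarantees that ISNUMERIC filters the rows before CAST runs:
    SELECT *
    FROM dbo.Settings
    WHERE ISNUMERIC(value) = 1
      AND CAST(value AS int) > 10;    -- may be evaluated first and fail

    -- On SQL Server 2012 and later, TRY_CAST sidesteps the ordering question
    -- entirely by returning NULL for values that cannot be converted:
    SELECT *
    FROM dbo.Settings
    WHERE TRY_CAST(value AS int) > 10;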

SCOM of the (Microsoft) World

I was doing some performance analysis today (again), and on one server I found that the CPU load regularly hit 100% and even, at times, stayed there for a while.

I happened to have a Microsoft Premier Field Engineer looking over my shoulder at the time, and he said something to the effect of „oh, that’s probably the Operations Manager monitoring“… Normally one would assume that monitoring is a quiet little task sitting in the background, doing its work unnoticed until there is something to report. In this case, however, it soon turned out that the „oh“ was justified: the „System Center Operations Manager“ process was frequently at the top of the CPU% list, often over 50%.

We could start to talk about the various options for restricting SCOM monitoring activity, but instead I decided to use a procedure which had cured a problem on another node of the same cluster:

  • Stop the „System Center Management“ service
  • Delete all files in „C:\Program Files\System Center Operations Manager\Agent\Health Service State“
  • Restart the „System Center Management“ service

And it did bring some relief. The CPU load was very rarely at 100%, and the monitoring service never went above 50%, at least not in the few hours I continued to watch it.

Ultimately the SCOM monitoring will have to be configured so as not to consume so many resources on the monitored machines, but for now we have widened one more bottleneck a little.

We can, again, wonder whether the SCOM team received a warning that the SCOM Agent had been stopped, as I would expect, but that is a question for another day.