Monday, February 28, 2011

SQLAnywhere Forum Is Here! (in Beta)

SQLA is a question-and-answer website devoted to SQL Anywhere and the other products that come in the box with SQL Anywhere (MobiLink, UltraLite, etc.). Version 1 of SQLA has been around for a while, and now Version 2 is available for Beta testing: it's a whole new website, with a new name, a new URL, and a familiar-but-slightly-different look and feel:

SQLAnywhere-Forum.sybase.com
Here's the official announcement:
SQL Anywhere Web Forum Now in Beta!
Here's another announcement; this one includes instructions for reclaiming your user id on the new site:
The new SQL Anywhere Forum Q&A site (aka SQLA 2.0) BETA now available

Danger Will Robinson!

All updates made on the BETA site will be thrown away prior to the new site going live.

Friday, February 25, 2011

The fRiDaY File

This was first published on March 24, 2006:

11.0.1.2558 EBF Is Available

The build 2558 EBF of SQL Anywhere 11.0.1 for 32- and 64-bit Windows is now available at downloads.sybase.com...

Wednesday, February 23, 2011

Automatic Restart

"Self-management" is very important in SQL Anywhere. It is central to the Number One Hallmark of SQL Anywhere:

Embeddability: SQL Anywhere can be easily embedded inside other applications. It combines high performance with very small memory footprint. SQL Anywhere contains a range of features to enable self-management and maintenance in frontline environments, including features that enable optimization of computer resources, self-tuning for improved performance, and simplification of remote installation and support.
So why does SQL Anywhere grind to a halt and wait for a response when it throws an assertion? In a typical embedded application, there's no operator at the console, no DBA watching for something to go wrong.

In most shops, even those with DBAs, the action most often taken after an assertion is to restart the server. Nine times out of ten, perhaps 99 times out of 100, that's the right thing to do: Get the server up and running and responding to requests. Have a look at the assertion later.

In many shops, restarting the server is an absolute requirement: downtime is worse than damaged data. It should be possible to specify "restart immediately after assertion"; perhaps not as the default behavior, but definitely available, unlike the current requirement to click, wait, fumble and hesitate through those mystifying dialog boxes.

"Why not use a Mirror Server?"


Because High Availability isn't appropriate for some small-footprint embedded applications.

Because High Availability isn't appropriate for some high-throughput applications which process gigabytes of data on an hourly basis.

Because High Availability doesn't get the failed partner server up and running after an assertion.

"It's too dangerous!"


Here... let me introduce you to Mr. CEO... you can tell him how that errant data page is critically important, how it's necessary to keep his application offline while you figure out what went wrong... he'll give you a new definition of "dangerous".

What's worse than an assertion?


Well, if your SQL Anywhere engine has just thrown an assertion and ground to a halt, it might be hard to imagine something worse than an assertion.

But there is: It's when your SQL Anywhere server goes completely unresponsive, grinding to a halt without throwing any error message.

So there's another feature SQL Anywhere needs: "restart when server goes catatonic".

If "restart" is too hard, how about simply "stop after assertion" and "stop when unresponsive"? Then the dbspawn / dbsrv12 commands can be wrapped inside a command file GOTO loop that immediately starts the server after it stops.
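That do-it-yourself GOTO loop might look something like this (a minimal sketch; the file names and options are hypothetical, patterned on the dbsrv12 examples elsewhere in this blog):

```batch
@ECHO OFF
REM Hypothetical restart loop: whenever the server stops, for any
REM reason (assertion, crash, deliberate shutdown), start it again.
:RESTART
"%SQLANY12%\bin32\dbsrv12.exe" -o dbsrv12_log_ddd12.txt -os 10M ddd12.db
GOTO RESTART
```

The obvious weakness is that a deliberate shutdown restarts the server too, which is exactly why built-in "stop after assertion" behavior would be better.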

Monday, February 21, 2011

Choosing An Embedded Database Upgrade Path

An embedded database is one which is tightly integrated with a software application, with new versions of the database often being delivered on the same release cycle as the application. Unlike centralized corporate applications with databases managed by database administrators at head office, applications with embedded databases are often delivered to multiple external sites and are run without any assistance from head office.

The "without any assistance" doesn't just apply to the initial installation and regular operations, it also applies to the process of upgrading an application and its embedded database from one version to another. With a corporate database back at head office, changes can be tested, applied step-by-step under watchful eyes, and even backed out if something goes wrong. Out in the field, however, it's all got to be automated, often as part of an InstallShield setup.exe, and it all has to work no matter what... nobody at head office has the time or inclination to deal with a thousand different installations.

So, let's consider an existing application with a thousand different remote sites running various versions 1 through n, with each version having different schema in its embedded database.

Now, version n + 1 is being constructed with more schema changes...

How should the automated database upgrade be designed?


Door Number 1: Ship code that dynamically alters the schema and data in-place, in the existing database, from whatever old version is currently being used to the new version n + 1 schema.

Door Number 2: Leave the existing database alone, install a new empty database with the new schema, and dynamically alter the data as it is being copied from the old database so it fits the schema in the new database.

Door Number 1: The Alter Schema Approach


One advantage of this approach is that it's often faster to execute; for example, if a large table has the same schema in both the old and new versions then nothing needs to be done. In the other approach, all the data must be copied all the time, even where the schema hasn't changed.

Another advantage is that the database space has already been allocated... which turns into a disadvantage if the alter-in-place process causes rows to split or other storage problems.

Both approaches require at least n blocks of code to handle all the possible transformations. With the Alter Schema Approach the schema and data are most likely transformed from version 1 to 2, 2 to 3, ... n to n + 1. With each new version, a new block must be added but the others probably remain unchanged. The disadvantage here is that passing the data through n transformations may introduce huge inefficiencies resulting in a lot of funky custom code being written for performance.
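Here's a minimal sketch of what the Alter Schema Approach's step-by-step dispatch might look like (all table, column and procedure names here are hypothetical, not taken from any real application):

```sql
-- Hypothetical sketch: alter the schema in place, one version step
-- at a time, until the database reaches the current version n + 1.
CREATE PROCEDURE upgrade_schema_in_place()
BEGIN
   DECLARE @version INTEGER;

   SELECT schema_version INTO @version FROM application_option;

   IF @version = 1 THEN             -- 1 -> 2
      ALTER TABLE t1 ADD data2 INTEGER NOT NULL DEFAULT 0;
      SET @version = 2;
   END IF;

   IF @version = 2 THEN             -- 2 -> 3
      ALTER TABLE t1 ALTER data2 SET DEFAULT 1;
      SET @version = 3;
   END IF;

   -- ... one new block added per release; the earlier blocks
   -- probably remain unchanged ...

   UPDATE application_option SET schema_version = @version;
END;
```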

Door Number 2: The Copy Data Approach


One advantage of this approach is that it's often easier to code an export-transform-load process than an alter-in-place statement, especially if there are large differences between the old and new schema.

Backing out the upgrade is also easier with this approach: just throw away the new database and go back to using the old one, it hasn't been touched.

The Copy Data Approach uses n blocks of code to transform the data from version 1 to n + 1, 2 to n + 1, ... n to n + 1. With each new version, each block of code must be examined and possibly changed, and a new block added. The disadvantage is all the extra work over time, checking and modifying all the transformations since the target has changed for each one. The advantages are that the resulting code tends to be brain-dead simple, and testing is easier because the data only goes through one transformation (and the schema, none at all). Plus, it can be argued that (in the real world) all of the transformation logic in the Alter Schema Approach must be checked and possibly modified with each new version because of those creeping inefficiencies.
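The Copy Data Approach might be sketched like this, assuming the old tables are reachable via proxy tables (again, all names are hypothetical):

```sql
-- Hypothetical sketch: one self-contained block per old version,
-- each copying data straight into the new n + 1 schema.
CREATE PROCEDURE copy_data_from_old_version ( IN @old_version INTEGER )
BEGIN
   IF @old_version = 1 THEN
      -- Version 1 had no data2 column; supply a default while copying.
      INSERT t1 ( pkey, data1, data2 )
      SELECT pkey, data1, 0
        FROM proxy_old_t1;
   ELSEIF @old_version = 2 THEN
      -- Version 2 already matches the new schema; copy as-is.
      INSERT t1 ( pkey, data1, data2 )
      SELECT pkey, data1, data2
        FROM proxy_old_t1;
   END IF;
   COMMIT;
END;
```

Every block targets the same destination schema, which is what makes each one brain-dead simple... and what forces every block to be re-examined when the target changes.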

The biggest advantage of this approach, however, is that the schema used by the application is exactly the one that was tested and delivered, not some altered version of an earlier schema. For example, ALTER TABLE ADD column statements put the new columns at the end of the table, which isn't a good idea for short, heavily updated columns if earlier columns contain long strings.
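For example (a hypothetical table; the point is only where the new column lands):

```sql
-- ALTER TABLE ADD always appends the new column after the last
-- existing column, even when that last column holds long strings.
CREATE TABLE customer (
   pkey   INTEGER NOT NULL PRIMARY KEY,
   notes  LONG VARCHAR );

ALTER TABLE customer ADD last_update TIMESTAMP DEFAULT TIMESTAMP;
-- last_update now sits after notes; a short, heavily updated column
-- stored beyond a long string is more expensive to reach and update.
```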

That advantage extends into testing: If the target schema remains unchanged regardless of the old version, only the data upgrade process needs to be tested against each old version, not the rest of the application.

On the other hand, if the target schema can be one of n altered versions plus version n + 1 for a fresh installation, the application must be tested against databases that were upgraded from each old version.

Still, It's Hard To Choose


Door Number 2, the Copy Data Approach, was chosen for Foxhound for this main reason: the new schema is delivered and installed as-is, just like the application code; it isn't patched or altered.

So far it's worked out well. Foxhound has reached n = 7 and, while there have been some performance problems, the code is simple and virtually bug free... nothing like the wailing and gnashing of teeth I've heard over the years from clients who use the Alter Schema Approach.

Maybe that's the real reason: copying data is easier.

Friday, February 18, 2011

The fRiDaY File

"No! No! I'm not anything like Dilbert!" ...

Coolest New Feature In 12.0.1? [fixed]

The code in Coolest New Feature In 12.0.1? was almost correct, except for the (re)discovery of a new entry to add to the list in How many times have you done this?:

  1. SQL Anywhere: Coding a SET or other executable statement ahead of a DECLARE CURSOR.
Here is the tested code for the sp_forward_to_remote_server system procedure, which lets you send a native query to a remote database and receive a result set in return without having to define proxy tables or write the query in SQL Anywhere syntax. First, two tables defined on "the other database" called ddd2:

CREATE TABLE t1 (
pkey INTEGER NOT NULL PRIMARY KEY,
data1 INTEGER NOT NULL,
data2 INTEGER NOT NULL );

INSERT t1 VALUES ( 1, 2, 2 );
INSERT t1 VALUES ( 2, 2, 2 );
COMMIT;

CREATE TABLE t2 (
pkey INTEGER NOT NULL PRIMARY KEY,
data3 INTEGER NOT NULL );

INSERT t2 VALUES ( 1, 98 );
INSERT t2 VALUES ( 2, 99 );
COMMIT;

SELECT * FROM t1 ORDER BY t1.pkey;

SELECT * FROM t2 ORDER BY t2.pkey;

pkey,data1,data2
1,2,2
2,2,2

pkey,data3
1,98
2,99

Here's a call to sp_forward_to_remote_server from the ddd1 database:

CREATE SERVER ddd2_server CLASS 'SAODBC'
USING 'DRIVER=SQL Anywhere 11;ENG=ddd;DBN=ddd2';

BEGIN

DECLARE @pkey INTEGER;
DECLARE @data3 INTEGER;
DECLARE @SQLSTATE VARCHAR ( 5 );

DECLARE @select_statement LONG VARCHAR;

DECLARE c_fetch INSENSITIVE CURSOR FOR
CALL sp_forward_to_remote_server ( 'ddd2_server', @select_statement );

SET @select_statement = '
SELECT t1.pkey,
t2.data3
FROM t1 INNER JOIN t2 ON t1.pkey = t2.pkey
ORDER BY t1.pkey';

OPEN c_fetch WITH HOLD;

FETCH c_fetch INTO
@pkey,
@data3;

SET @SQLSTATE = SQLSTATE;

WHILE @SQLSTATE = '00000' LOOP

MESSAGE STRING ( 't1.pkey = ', @pkey, ', t2.data3 = ', @data3 ) TO CLIENT;

FETCH c_fetch INTO
@pkey,
@data3;

SET @SQLSTATE = SQLSTATE;

END LOOP;

CLOSE c_fetch;

END;

t1.pkey = 1, t2.data3 = 98
t1.pkey = 2, t2.data3 = 99

To quote the earlier article:
What's cool about that? Well, the SQL code on database ddd1 runs "SELECT FROM t1 INNER JOIN t2" and receives a result set without having to define any proxy tables, and without having to worry about whether "FROM proxy_t1 INNER JOIN proxy_t2" will run quickly (over on the other database) or slowly (because all the rows are brought across and the join is done on ddd1).

No more FORWARD TO 'CREATE VIEW ...' just so your SELECT FROM proxy_view will run quickly, now you can just run the SELECT as-is and not worry about the middleware.

Wednesday, February 16, 2011

Loading 32-bit Versus 64-bit DLLs [revisited]

In Monday's Loading 32-bit Versus 64-bit DLLs I wrote about how it took 5 years before I finally got around to creating a 64-bit version of a 32-bit C DLL. According to this advice, way back then, it was a requirement for running on the 64-bit version of SQL Anywhere:


Newsgroups: sybase.public.sqlanywhere.general
From: John Smirnios
Date: 11 Jul 2005 21:27:08 -0700
Subject: Re: External procedures with 64-bit engine

64-bit processes cannot load 32-bit DLLs and vice-versa. If you are using a 64-bit
server, you must build a 64-bit DLL to contain your external stored procedures.

Beware Ancient Advice!


As Volker Barth pointed out in his comments on Monday's post, matching bitness between server and DLL is no longer required...

Breck, when using external C/C++ environments, 64-bit servers can run 32-bit DLLs
and vice versa. - The external functions just have to be declared for the according
32/64-bit external envirnoment.

Regards
Volker

That being said, you could just restrict to use exactly *one* type of DLL and would
not have to check which one is available...

So the real answer might be:

You don't have to know.

Volker
If I'd read further, I might have noticed this more recent newsgroup posting:

Newsgroups: sybase.public.sqlanywhere.general
From: "Karim Khamis [Sybase iAnywhere]"
Date: 11 Jan 2010 06:20:01 -0800
Subject: Re: External procedures in x64

If you can do it, upgrade to SA 11.0.1. The external environments in SA 11.0.0
and up support the existing C external function api and you can mix and match
32 bit dlls with 64 bit servers AND vice versa. Changing your existing external
function declarations to external environment declarations is trivial since you
only need to add one of the C_ODBC32, C_ODBC64, C_ESQL32 or C_ESQL64 language
types. Once you do that, your dlls will load in an external environment and will
get invoked in exactly the same manner as they currently do within the server.
One note though, because the external environments are in a separate process,
external environment calls are much slower than in process external function
calls. As a result, if performance is the overriding factor, then your best
course of action is to see if you can recompile your dll for 64-bit.
What Karim and Volker are talking about is adding the LANGUAGE C_ESQL32 clause to the CREATE PROCEDURE statement. For example, this code works on both 32-bit and 64-bit SQL Anywhere 11 servers:

BEGIN

DECLARE @command_line VARCHAR ( 32766 );
DECLARE @return_code INTEGER;
DECLARE @diagnostic_code INTEGER;
DECLARE @diagnostic_string VARCHAR ( 255 );

BEGIN
DROP PROCEDURE rroad_get_command_line;
EXCEPTION WHEN OTHERS THEN
END;

CREATE PROCEDURE rroad_get_command_line (
OUT command_line VARCHAR ( 32766 ),
OUT return_code INTEGER,
OUT diagnostic_code INTEGER,
OUT diagnostic_string VARCHAR ( 255 ) )
EXTERNAL NAME 'get_command_line@C:\\projects\\foxhound\\rroad1.dll' LANGUAGE C_ESQL32;

CALL rroad_get_command_line (
@command_line,
@return_code,
@diagnostic_code,
@diagnostic_string );

SELECT @command_line;

END;

There is a wrinkle to the LANGUAGE C_ESQL32 clause, however: the code inside the DLL is running inside a different process and as such it has a different view of the outside world from code running inside the server itself.

For example, the get_command_line procedure in the code above calls the GetCommandLine function in the Windows API to retrieve the command-line string for the current process. With Monday's version of the code, using separate 32-bit and 64-bit DLLs that run inside the server (no LANGUAGE C_ESQL32 clause), the command line that launched SQL Anywhere is returned:

SELECT @command_line;

@command_line
'"C:\\Program Files\\SQL Anywhere 12\\bin32\\dbeng12.exe" -o dbeng12_log_ddd12.txt -oe dbeng12_log_fatal_ddd12.txt -os 10M -x tcpip -zl -zp -zt ddd12.db "-hn0,11148:216"'

@command_line
'"C:\\Program Files\\SQL Anywhere 12\\bin64\\dbeng12.exe" -o dbeng12_log_ddd12.txt -oe dbeng12_log_fatal_ddd12.txt -os 10M -x tcpip -zl -zp -zt ddd12.db "-hn0,10816:196"'

When the LANGUAGE C_ESQL32 clause is used, something completely different (and unexpected) is returned:

SELECT @command_line;

@command_line
'"C:\\Program Files\\SQL Anywhere 12\\bin32\\..\\bin32\\dbexternc12.exe" ESQL "ddd12" "ddd12" "EXTENV_MAIN" "527365143:131947461:3::2011-02-14 06:04:20.394" '

@command_line
'"C:\\Program Files\\SQL Anywhere 12\\bin64\\..\\bin32\\dbexternc12.exe" ESQL "ddd12" "ddd12" "EXTENV_MAIN" "41488089:1092220081:4::2011-02-14 06:01:52.254" '

So, alas, Foxhound's going to stick with the kludgy code from Monday's post.

I'll say this, however... if I'd known the LANGUAGE C_ESQL32 clause made it possible to call a 32-bit DLL from a 64-bit server with no further changes, I might have looked for a way to make it work... or given up on calling GetCommandLine.

Tuesday, February 15, 2011

12.0.1 Update and 12.0.1.2636 EBF Are Available

Happy Valentine's Day! The free update to SQL Anywhere 12.0.1 is now available for download, and so is the 12.0.1.2636 Express Bug Fix:

Monday, February 14, 2011

Loading 32-bit Versus 64-bit DLLs

It's only been five years since someone asked for it... I'm finally getting around to making Foxhound run on a 64-bit SQL Anywhere engine. The bottleneck's always been the teeny tiny custom DLL that Foxhound uses to read the Windows registry and perform a few other tasks. The 64-bit engine can only use a 64-bit DLL, so two versions (32-bit and 64-bit) of the DLL are now required.

So why the delay creating a 64-bit DLL? Let's just say wrestling with C code is not my favorite task, and I assumed migrating to 64-bits was going to be a big job.

Ha ha! Not so big! Microsoft Visual Studio 2008 lets you create a second 64-bit DLL with about ten mouseclicks in the GUI... six clicks if you get them right the first time. Plus, zero C code changes... even the C headers delivered with SQL Anywhere work with both 32-bit and 64-bit code.

But... the SQL code's another matter. The CREATE PROCEDURE ... EXTERNAL NAME clause has to specify which DLL is to be used, and Foxhound doesn't know which SQL Anywhere engine is being used until the database starts up. In fact, the user can switch back and forth, using different SQL Anywhere engines on the same database.

Question: How do I determine which DLL to load at runtime?

Answer: You can't, not from SQL code.

Other Answer: Try calling each version of the DLL, see which one works.

Here's the test code that was used to verify the Other Answer:


BEGIN

DECLARE @bits VARCHAR ( 10 );
DECLARE @command_line VARCHAR ( 32766 );
DECLARE @return_code INTEGER;
DECLARE @diagnostic_code INTEGER;
DECLARE @diagnostic_string VARCHAR ( 255 );

SET @bits = 'unknown'; -- until proven otherwise

-- Try the 32-bit dll.

BEGIN
DROP PROCEDURE rroad_get_command_line;
EXCEPTION WHEN OTHERS THEN
END;

CREATE PROCEDURE rroad_get_command_line (
OUT command_line VARCHAR ( 32766 ),
OUT return_code INTEGER,
OUT diagnostic_code INTEGER,
OUT diagnostic_string VARCHAR ( 255 ) )
EXTERNAL NAME 'get_command_line@C:\\projects\\foxhound\\rroad1.dll';

BEGIN

CALL rroad_get_command_line (
@command_line,
@return_code,
@diagnostic_code,
@diagnostic_string );

SET @bits = '32-bit'; -- call worked

EXCEPTION WHEN OTHERS THEN -- ignore exception

END;

IF @bits = 'unknown' THEN

-- Try the 64-bit dll.

BEGIN
DROP PROCEDURE rroad_get_command_line;
EXCEPTION WHEN OTHERS THEN
END;

CREATE PROCEDURE rroad_get_command_line (
OUT command_line VARCHAR ( 32766 ),
OUT return_code INTEGER,
OUT diagnostic_code INTEGER,
OUT diagnostic_string VARCHAR ( 255 ) )
EXTERNAL NAME 'get_command_line@C:\\projects\\foxhound\\rroad164.dll';

BEGIN

CALL rroad_get_command_line (
@command_line,
@return_code,
@diagnostic_code,
@diagnostic_string );

SET @bits = '64-bit'; -- call worked

EXCEPTION WHEN OTHERS THEN -- ignore exception

END;

END IF;

SELECT @bits;

END;

Sunday, February 13, 2011

SQL Anywhere 12.0.1 Is Available [update]

As noted yesterday, full product releases (with license keys) of SQL Anywhere 12.0.1 for Windows and Linux are now available to subscribers at the Sybase Product Download Center https://sybase.subscribenet.com/.

What wasn't mentioned was that the setup.exe will also do an upgrade from 12.0.0 to 12.0.1.3152 as well as a full install, so you don't have to uninstall 12.0.0 first.

Here are three Windows downloads available on the Product Download Center:

  • SQL Anywhere 12.0.1 Windows Documentation - don't bother, this ain't the Help :)

  • CD - SQL Anywhere 12.0.1 Windows 32/64-bit - this is what you want: SQL Anywhere, Sybase Central and dbisql... but not the Help; that you have to download separately, and the setup GUI gives you a link to it.

  • CD - SQL Anywhere 12.0.1 Development and Design Tools Windows 32/64-bit - this is InfoMaker 12.0 and PowerDesigner 15.2 Physical Model.


As of this writing nothing related to 12.0.1 has shown up yet at http://downloads.sybase.com.

Saturday, February 12, 2011

SQL Anywhere 12.0.1 Is Available

If you are a subscriber, full product releases (with license keys) of SQL Anywhere 12.0.1 for Windows and Linux are now available at the Sybase Product Download Center https://sybase.subscribenet.com/.



However, as of this writing upgrades from 12.0.0 to 12.0.1 haven't shown up yet at http://downloads.sybase.com.

Friday, February 11, 2011

The fRiDaY File

Here's the rule: When you see a bug on a website, the polite thing to do is report it directly to the webmaster.

But what if the Feedback page has a bug?



How do you know the report got through? You don't, so you use the next best thing, today's fRiDaY File...

"Yoo Hoo! Meester Web Master! ServletException! ..."

Techwave 2011 (2)

Details keep trickling out...


Save the date for Sybase TechWave 2011 – September 12-16 in Las Vegas, Nevada at The Venetian Resort Hotel.

TechWave 2011 will include all the performance and tuning tips and techniques, best practices, and hands-on training on core Sybase products that you have come to expect – plus, we’re combining forces with SAP to provide expanded content and networking opportunities, which will include a joint TechWave/TechEd keynote, special event and Demo Jam participation.

Be sure to bookmark the TechWave website, where all the latest information will be posted. And follow @Sybase on Twitter for conference updates, tagged #TechWave.

Tell us which Sybase product you are most interested in learning more about and be entered to win a FREE registration to attend the conference (valued at $1200)!

We hope you will be able to join us!

Sincerely,
The Sybase TechWave Team

Wednesday, February 9, 2011

Coolest New Feature In 12.0.1?

Here's a candidate for the Coolest New Feature in the next version of SQL Anywhere: The sp_forward_to_remote_server system procedure. (No, 12.0.1 isn't out yet, but the Help is.)

Here's how I think sp_forward_to_remote_server works: Like the wonderful FORWARD TO statement, you can send a raw other-dialect SQL statement over to a different database (say, Oracle or Microsoft SQL Server) and have that statement executed on the other server as-is, without having to futz around with proxy tables or proxy procedures or remote procedure calls.

Unlike the FORWARD TO statement, however, you will be able to receive a result set back from the other server. And, again unlike FORWARD TO, you won't have to wrap your code with EXECUTE IMMEDIATE to get it to run inside a stored procedure.

Previously, getting a result set back from FORWARD TO was an unattainable dream unless you were willing to run it manually in dbisql. Now, you can do it with sp_forward_to_remote_server, with just a teeny little wrinkle: you have to use a cursor; you can't call sp_forward_to_remote_server from the FROM clause.

And (I think... I'm just guessing) it has to be an old-school cursor... you know, DECLARE, OPEN, FETCH, check SQLSTATE, LOOP, CLOSE, all that crap... none of that fancy newfangled ease-of-use stuff with the FOR statement.

To repeat: I'm guessing about not using FOR... remember, 12.0.1 isn't available yet, just the Help.

But, let's keep the guesswork going; here are two tables defined on "the other database" called ddd2:


CREATE TABLE t1 (
pkey INTEGER NOT NULL PRIMARY KEY,
data1 INTEGER NOT NULL,
data2 INTEGER NOT NULL );

INSERT t1 VALUES ( 1, 2, 2 );
INSERT t1 VALUES ( 2, 2, 2 );
COMMIT;

CREATE TABLE t2 (
pkey INTEGER NOT NULL PRIMARY KEY,
data3 INTEGER NOT NULL );

INSERT t2 VALUES ( 1, 98 );
INSERT t2 VALUES ( 2, 99 );
COMMIT;

SELECT * FROM t1 ORDER BY t1.pkey;

SELECT * FROM t2 ORDER BY t2.pkey;

pkey,data1,data2
1,2,2
2,2,2

pkey,data3
1,98
2,99

Here's a call to sp_forward_to_remote_server from the ddd1 database:

CREATE SERVER ddd2_server CLASS 'SAODBC'
USING 'DRIVER=SQL Anywhere 11;ENG=ddd;DBN=ddd2';

BEGIN

DECLARE @pkey INTEGER;
DECLARE @data3 INTEGER;
DECLARE @SQLSTATE VARCHAR ( 5 );

DECLARE @select_statement LONG VARCHAR;

SET @select_statement = '
SELECT t1.pkey,
t2.data3
FROM t1 INNER JOIN t2 ON t1.pkey = t2.pkey
ORDER BY t1.pkey';

DECLARE c_fetch INSENSITIVE CURSOR FOR
CALL sp_forward_to_remote_server ( 'ddd2_server', @select_statement );

OPEN c_fetch WITH HOLD;

FETCH c_fetch INTO
@pkey,
@data3;

SET @SQLSTATE = SQLSTATE;

WHILE @SQLSTATE = '00000' LOOP

MESSAGE STRING ( 't1.pkey = ', @pkey, ', t2.data3 = ', @data3 ) TO CLIENT;

FETCH c_fetch INTO
@pkey,
@data3;

SET @SQLSTATE = SQLSTATE;

END LOOP;

CLOSE c_fetch;

END;

t1.pkey = 1, t2.data3 = 98
t1.pkey = 2, t2.data3 = 99

What's cool about that? Well, the SQL code on database ddd1 runs "SELECT FROM t1 INNER JOIN t2" and receives a result set without having to define any proxy tables, and without having to worry about whether "FROM proxy_t1 INNER JOIN proxy_t2" will run quickly (over on the other database) or slowly (because all the rows are brought across and the join is done on ddd1).

No more FORWARD TO 'CREATE VIEW ...' just so your SELECT FROM proxy_view will run quickly, now you can just run the SELECT as-is and not worry about the middleware.

Monday, February 7, 2011

What's New in Foxhound 1.1

Version 1.1 of Foxhound for SQL Anywhere has just been released; here's what's new...

SQL Anywhere 12 Support - Foxhound now supports SQL Anywhere Version 12 target databases, in addition to versions 5.5, 6, 7, 8, 9, 10 and 11.

Better Performance, Lower Overhead - The Foxhound Monitor now runs faster and uses fewer resources.

  • This screenshot shows an idle target database with 1,005 connections that aren't doing anything. Prior to being shut down at 9:08 AM, the previous version of Foxhound was taking over 2 minutes to record each sample, and the target database server was using over 2% CPU to satisfy Foxhound's requests for information about those connections.



    By 9:17 AM the new version of Foxhound was up and running. Now Foxhound was able to record a sample in 5 seconds or less, bringing the interval between samples closer to the standard 10 seconds. Also, the CPU load on the target database is lower.
100 Targets - You can now monitor up to 100 different target databases using a single copy of the Extended edition of Foxhound.

Faster Upgrade - Foxhound now takes less time and uses less disk space when re-installing or upgrading.

Faster, Better Purge - The Foxhound purge process has been completely rewritten to be faster and more effective at dealing with giant Foxhound databases.
  • The new Purge uninteresting connection data setting specifies how soon uninteresting connection-level Monitor sample data will be deleted.

  • The new Purge speed setting specifies how aggressively the purge process will pursue its goal of deleting old data.

  • The new Purge Run report shows how well the purge settings are working.


  • Also, the "after 1 day / week / ..." calculations are based on the most recent successful sample, not the current timestamp, so you don't lose the most recent significant data just because sampling has been turned off for a while.
Better Diagnostics - It's now easier to diagnose problems which occur when starting and running Foxhound.

Ad-hoc Queries - The Export Samples feature has been replaced with the ability to perform ad-hoc queries against the Foxhound database via dbisql and other client software.

Better Docs - The Frequently Asked Questions now have different "alias" questions pointing to the same answers, to make the answers easier to find.

See also...

Friday, February 4, 2011

The fRiDaY File

This was first published on March 23, 2006:

Capturing the Server Console Log (2)

The previous article introduced the sa_server_messages() procedure and how it can be used to query the server console log messages produced by the SQL Anywhere database server.

This article talks about how to capture those messages in a permanent database table, on the fly, while the server is running.

Here's the table:


CREATE TABLE server_message (
message_number UNSIGNED BIGINT NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY CLUSTERED,
msg_id UNSIGNED BIGINT NOT NULL,
msg_text LONG VARCHAR,
msg_time TIMESTAMP,
msg_severity VARCHAR ( 255 ),
msg_category VARCHAR ( 255 ),
msg_database VARCHAR ( 255 ) );

Here is a SQL Anywhere event that automatically fires every second to do the work of copying all the new rows returned by sa_server_messages() into the server_message table:

CREATE EVENT capture_server_messages_every_second SCHEDULE
START TIME '00:00:00'
EVERY 1 SECONDS
HANDLER BEGIN

DECLARE @next_msg_id UNSIGNED BIGINT;
DECLARE @sqlcode INTEGER;
DECLARE @sqlstate VARCHAR ( 5 );
DECLARE @errormsg VARCHAR ( 32767 );

------------------------------------------------------------
-- Get the starting point msg_id, which may be zero.

SELECT next_msg_id.next_msg_id
INTO @next_msg_id
FROM next_msg_id;

------------------------------------------------------------
-- Get all the new messages, if any.

INSERT server_message (
msg_id,
msg_text,
msg_time,
msg_severity,
msg_category,
msg_database )
SELECT sa_server_messages.msg_id,
sa_server_messages.msg_text,
sa_server_messages.msg_time,
sa_server_messages.msg_severity,
sa_server_messages.msg_category,
sa_server_messages.msg_database
FROM sa_server_messages ( first_msg = @next_msg_id )
ORDER BY sa_server_messages.msg_id;

------------------------------------------------------------
-- Update the starting point for next time.

-- Note: The previous INSERT may not have inserted anything,
-- so @@IDENTITY cannot be used.

SELECT TOP 1
server_message.msg_id + 1
INTO @next_msg_id
FROM server_message
ORDER BY server_message.message_number DESC;

UPDATE next_msg_id
SET next_msg_id.next_msg_id = @next_msg_id
WHERE next_msg_id.one_row_only = 1;

COMMIT;

END;

Here's a query that displays the table:

SELECT message_number,
msg_id,
msg_time,
msg_severity,
msg_category,
msg_database,
msg_text
FROM server_message
ORDER BY message_number ASC;

Here's what the output looks like for messages captured just before and after the server was stopped and started:



Note that the server_message.message_number column starts at 1 and increases by 1 for all rows, whereas the msg_id value returned by sa_server_messages() starts over at 0 each time the server is started.

Astute readers will notice something's missing... here is the table that stores the starting point specified in the "FROM sa_server_messages ( first_msg = @next_msg_id )" clause:

CREATE TABLE next_msg_id (
one_row_only INTEGER NOT NULL DEFAULT 1 CHECK ( one_row_only = 1 ) PRIMARY KEY,
next_msg_id UNSIGNED BIGINT NOT NULL DEFAULT 0 );

INSERT next_msg_id VALUES ( DEFAULT, DEFAULT );
COMMIT;

CREATE TRIGGER next_msg_id
BEFORE DELETE ON next_msg_id
FOR EACH ROW
BEGIN
ROLLBACK TRIGGER WITH RAISERROR 99999 'Do not attempt to delete from next_msg_id';
END;

CREATE EVENT database_start TYPE DatabaseStart
HANDLER BEGIN

TRUNCATE TABLE next_msg_id; -- does not fire BEFORE DELETE trigger
INSERT next_msg_id VALUES ( DEFAULT, DEFAULT );
COMMIT;

END;

The CHECK condition makes sure there is never more than one row in next_msg_id, the INSERT puts one row in that table, the trigger makes sure that row is never deleted, and the database_start event resets the next_msg_id table every time the server is restarted.

Wednesday, February 2, 2011

Capturing the Server Console Log

"Server Console" is a fancy term for the SQL Anywhere database server window, and "Server Console Log" refers to the messages from the server that appear in that window:

Actually, it's probably more accurate to define "Server Console" as the SQL Anywhere Console utility, and the "Server Console Log" as the Messages pane displayed by that utility:
"%SQLANY12%\bin32\dbconsole.exe"^
-c "ENG=ddd12;DBN=ddd12;UID=dba;PWD=sql"

But that's not important... what matters is that the messages come from the database server, and what's really important is to...

...Always Capture The Server Console Log In Production!


The easiest way to do that is to specify the dbsrv12 -o filespec.txt command line option to save the console log to a text file:
"%SQLANY12%\bin32\dbsrv12.exe"^
-o dbsrv12_log_ddd12.txt^
-oe dbsrv12_log_fatal_ddd12.txt^
-os 10M^
ddd12.db
While you're at it, specify the -os 10M option to rename and restart the filespec.txt file when it grows large; -os 1M or -os 10M are reasonable limits, beyond which the file can get unwieldy when you need to browse it.
Tip: If you're starting the server as a service, you will probably need to include the full drive and path in the -o filespec.txt option.
The -oe different_filespec.txt option tells SQL Anywhere to put Really Important Messages in a separate file: startup and fatal errors, assertions and Catch-22 messages. Here's an example of a Catch-22 message; the server can't start because it can't open the -o filespec.txt file, because that file is already open, because the server is already running:
01/30 05:12:07. Can't open Message window log file: dbsrv12_log_ddd12.txt
Why is it important to capture the server console log? Because if you don't, and the server displays a Really Important Message and then crashes, the message is gone gone gone, never to be seen again.

Another Way


There's another way to look at the server console log besides dbconsole and the -o file: the sa_server_messages() procedure. This is not a substitute for always capturing the log via -o filespec.txt; rather, it's an alternative that lets you write SQL SELECT statements to query the messages:
MESSAGE 'Hello, World!' TO CONSOLE;

SELECT *
FROM sa_server_messages()
ORDER BY sa_server_messages.msg_id;

(Why isn't sa_server_messages() a substitute for -o filespec.txt? Because sa_server_messages() only works while the server is running, so if the server crashes you still need the -o file. Plus, sa_server_messages() only shows you messages produced since the server started... and if the server has been running for a long time, it will only show you recent messages. But besides all that, it's still pretty cool :)
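For example, sa_server_messages() accepts a first_msg parameter (the same one used in the capture code earlier), which is handy when you only want messages from a known point onward. A sketch:

```sql
-- Show only messages with msg_id >= 100; the starting point 100 is
-- an arbitrary example value, not anything from the original post.
SELECT msg_id, msg_time, msg_category, msg_text
  FROM sa_server_messages ( first_msg = 100 )
 ORDER BY msg_id;
```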
Here's a simple query that shows all the checkpoints:
SELECT sa_server_messages.msg_text,
       sa_server_messages.msg_time
  FROM sa_server_messages()
 WHERE sa_server_messages.msg_category = 'CHKPT'
 ORDER BY sa_server_messages.msg_id;

Here's a not-so-simple query that calculates how long each checkpoint took (surely, someone can write a better query, i.e., simpler, maybe with an OLAP WINDOW)...

BEGIN

DECLARE LOCAL TEMPORARY TABLE checkpoint_record (
   checkpoint_starting TIMESTAMP NOT NULL PRIMARY KEY,
   checkpoint_finished TIMESTAMP )
   NOT TRANSACTIONAL;

INSERT checkpoint_record ( checkpoint_starting )
   SELECT sa_server_messages.msg_time
     FROM sa_server_messages()
    WHERE sa_server_messages.msg_category = 'CHKPT'
      AND sa_server_messages.msg_text LIKE 'Starting %';

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT checkpoint_record.checkpoint_starting AS @checkpoint_starting
  FROM checkpoint_record
 ORDER BY checkpoint_record.checkpoint_starting
   FOR READ ONLY
DO

   UPDATE checkpoint_record
      SET checkpoint_record.checkpoint_finished = sa_server_messages.msg_time
     FROM ( SELECT TOP 1 sa_server_messages.msg_time
              FROM sa_server_messages()
             WHERE sa_server_messages.msg_category = 'CHKPT'
               AND sa_server_messages.msg_text LIKE 'Finished %'
               AND sa_server_messages.msg_time >= @checkpoint_starting
             ORDER BY sa_server_messages.msg_time ) AS sa_server_messages
    WHERE checkpoint_record.checkpoint_starting = @checkpoint_starting;

END FOR;

SELECT *,
       DATEDIFF ( MILLISECOND,
                  checkpoint_record.checkpoint_starting,
                  checkpoint_record.checkpoint_finished ) AS msec
  FROM checkpoint_record
 ORDER BY checkpoint_record.checkpoint_starting;

END;

checkpoint_starting checkpoint_finished msec
2011-01-30 05:11:32.000 2011-01-30 05:11:32.281 281
2011-01-30 05:31:33.453 2011-01-30 05:31:33.937 484
2011-01-30 05:51:35.046 2011-01-30 05:51:35.515 469
2011-01-30 06:11:36.640 2011-01-30 06:11:37.078 438
2011-01-30 06:31:38.234 2011-01-30 06:31:38.781 547
2011-01-30 06:51:39.937 2011-01-30 06:51:41.125 1188
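Taking up the invitation above, here's one hedged attempt at a simpler OLAP WINDOW version. It's a sketch, not tested code, and it assumes the server supports window aggregates with a ROWS frame:

```sql
-- For each 'Starting checkpoint' row, find the earliest 'Finished'
-- message time among the rows at or after it, then keep only the
-- 'Starting' rows and compute the elapsed milliseconds.
SELECT msg_time AS checkpoint_starting,
       checkpoint_finished,
       DATEDIFF ( MILLISECOND, msg_time, checkpoint_finished ) AS msec
  FROM ( SELECT msg_id,
                msg_time,
                msg_text,
                MIN ( CASE WHEN msg_text LIKE 'Finished %'
                           THEN msg_time END )
                   OVER ( ORDER BY msg_id
                          ROWS BETWEEN CURRENT ROW
                                   AND UNBOUNDED FOLLOWING )
                   AS checkpoint_finished
           FROM sa_server_messages()
          WHERE msg_category = 'CHKPT' ) AS paired
 WHERE msg_text LIKE 'Starting %'
 ORDER BY msg_id;
```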
Besides being easily searched and having extra columns like msg_category, the sa_server_messages() result set offers this distinct advantage: the timestamps are more precise; e.g., 2011-01-30 05:17:57.578 instead of 01/30 05:17:57.