Saturday, October 30, 2010

Turning Dark Clouds into Silver Linings - the slides

If you couldn't attend the conference to hear Glenn Paulley speak about data management in the cloud, you can read what he had to say here.

Of particular interest to me was this point from slide 6:

Self-management is largely all about performance
  • A significant exception: error handling
If you don't think error handling is a big deal, have a look at this conversation about a bug... a feature... an interesting idiosyncrasy in SQL Anywhere.

Friday, October 29, 2010

The Seven Deadly Habits of an Oracle DBA

Seriously, it doesn't say "Oracle" in the title of this article, and the points it makes apply to a lot of non-Oracle shops including some using SQL Anywhere:

The Seven Deadly Habits of a DBA
But... the writer IS talking about Oracle, and Habit #5 does apply to every single large Oracle (and IBM) shop I've ever dealt with.


Habit #5. THE BLAME GAME: "Don't look at me, it's the developer's fault that SQL is in production"

Some DBAs have a real "us versus them" mentality when it comes to developers in their organization. They see themselves not as facilitators helping the developers develop quality code from a database standpoint, but rather as guardians who prevent poor-quality code from making it into production. This might seem like semantics, but a confrontational relationship between developers and DBAs results in a lack of developer initiative and significant slowdowns in release cycles.

Cures:
  • Select DBAs who understand it's their responsibility to work as an integrated team with the developers they support.

  • Cultivate a team attitude by structuring continuous DBA involvement in every project rather than at review milestones.

  • Consider assigning an individual DBA in a developer support role. If it's clearly in the job description, there's more motivation to do it well.


The "Blame Game" title is lame; the important point is the us-versus-them mentality that destroys creativity and productivity.

Wednesday, October 27, 2010

Getting a BIGINT from DATEDIFF

Did you know that DATEDIFF returns a signed INTEGER value? Not BIGINT?


SELECT EXPRTYPE ( 'SELECT DATEDIFF ( DAY, CURRENT TIMESTAMP, CURRENT TIMESTAMP + 1 )', 1 ) AS "Data Type";

Data Type
'int'

Who cares, you ask?


Well, did you know that DATEDIFF ( MILLISECOND, ... ) craps out at 25 days?

More specifically, somewhere between 24 and 25 days (as returned by DATEDIFF ( DAY, ... )), the same call using DATEDIFF ( MILLISECOND, ... ) blows past the limit for a signed INTEGER.
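The arithmetic behind that limit is easy to check. Here's a quick sketch (in Python rather than SQL; the 2**31 - 1 ceiling is just the maximum for a signed 32-bit INTEGER):

```python
# DATEDIFF returns a signed 32-bit INTEGER, so the largest representable
# millisecond difference is 2**31 - 1.
INT_MAX = 2**31 - 1               # 2147483647
MS_PER_DAY = 24 * 60 * 60 * 1000  # 86400000

# 24 days fits comfortably...
print(24 * MS_PER_DAY)       # 2073600000 (the @ok value shown below)
# ...but the ceiling is reached partway through day 25.
print(INT_MAX / MS_PER_DAY)  # about 24.855 days
```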

BEGIN
   DECLARE @sqlcode  INTEGER;
   DECLARE @sqlstate VARCHAR ( 5 );
   DECLARE @errormsg VARCHAR ( 32767 );

   DECLARE @ok    BIGINT;
   DECLARE @splat BIGINT;

   SELECT DATEDIFF ( MILLISECOND,
                     CURRENT TIMESTAMP,
                     DATEADD ( DAY, 24, CURRENT TIMESTAMP ) )
     INTO @ok;

   BEGIN

      SELECT DATEDIFF ( MILLISECOND,
                        CURRENT TIMESTAMP,
                        DATEADD ( DAY, 25, CURRENT TIMESTAMP ) )
        INTO @splat;

   EXCEPTION WHEN OTHERS THEN
      SELECT SQLCODE, SQLSTATE, ERRORMSG()
        INTO @sqlcode, @sqlstate, @errormsg;
      MESSAGE STRING (
         'EXCEPTION raised by "SELECT INTO @splat" at ',
         CURRENT TIMESTAMP,
         ': SQLCODE = ', @sqlcode,
         ', SQLSTATE = ', @sqlstate,
         ', ERRORMSG() = ', @errormsg )
         TO CLIENT;

   END;

   SELECT @ok, @splat;

END;

@ok,@splat
2073600000,(NULL)

EXCEPTION raised by "SELECT INTO @splat" at 2010-10-16 08:52:49.610: SQLCODE = -158, SQLSTATE = 22003, ERRORMSG() = Value datediff(millisecond,2010-10-16 08:52:49.610,2010-11-10 08:52:49.610) out of range for destination
You can probably figure out the exact "splat!" point between 24 and 25 days using DATEADD ( HOUR, ... ) or MINUTE or even SECOND. Note, however, that DATEDIFF ( SECOND, ... ) and MINUTE have their own splat! points, and the effect of multiple splat! points on code verbosity will soon become apparent.

Who cares? I do!


Or at least, the Foxhound Database Monitor cares... Foxhound lives and dies on the calculation of elapsed times between two arbitrary timestamps. Tiny values need to be reasonably precise, so the calculations are done in milliseconds. Huge intervals (days, weeks, years) must be accommodated, so BIGINT is used. And to reduce code complexity, BIGINT milliseconds are used throughout (there's a bunch of code devoted to formatting intervals for display, and sticking to milliseconds for input to that code makes it easier).
Sounds like a job for FLOAT? Hah! ...don't talk to me about floating point numbers, they're icky sloppy things, not to be touched or handled without gloves.
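As an illustration of the kind of interval-formatting code mentioned above, here's a hypothetical sketch in Python (not Foxhound's actual code) of how BIGINT milliseconds might be split into days, hours, minutes and seconds for display:

```python
def format_interval(msec: int) -> str:
    """Format an elapsed time in milliseconds as 'Nd HH:MM:SS.mmm'."""
    sec, ms = divmod(msec, 1000)   # whole seconds, leftover milliseconds
    minute, s = divmod(sec, 60)
    hour, m = divmod(minute, 60)
    d, h = divmod(hour, 24)
    return f"{d}d {h:02d}:{m:02d}:{s:02d}.{ms:03d}"

print(format_interval(593))         # 0d 00:00:00.593
print(format_interval(1987200011))  # 23d 00:00:00.011
```

Keeping all inputs in one unit (milliseconds) means this is the only formatting entry point needed, which is the design point being made above.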

Yes, I am a Data Type Bigot and proud of it. In olden days my motto was, "If it ain't greater than zero and less than 32767 I'm not interested!"

Now it's "Give me fixed point or give me death!"

Besides, DATEDIFF returns an INTEGER, and it *still* craps out at 25 days if you CAST it as FLOAT.
One workaround is to accept FLOAT-like behavior in a BIGINT value (large values are not perfectly precise), and turn failing DATEDIFF ( MILLISECOND, ... ) calls into ones that work: DATEDIFF ( SECOND, ... ) * 1000, DATEDIFF ( MINUTE, ... ) * 60 * 1000 and so on.

Here's a warts-and-all excerpt from Foxhound; the columns to look at are started_at, completed_at and run_msec:

CREATE TABLE rroad_purge_run (
run_number BIGINT NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY, -- do not INSERT or UPDATE this column
progress VARCHAR ( 100 ) NOT NULL DEFAULT 'Starting',
started_at TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP, -- do not INSERT or UPDATE this column
is_complete VARCHAR ( 1 ) NOT NULL DEFAULT 'N' CHECK ( is_complete IN ( 'Y', 'N' ) ),
completed_at TIMESTAMP NOT NULL DEFAULT TIMESTAMP, -- do not INSERT or UPDATE this column

must_trigger_next_purge VARCHAR ( 1 ) NOT NULL DEFAULT 'N' CHECK ( must_trigger_next_purge IN ( 'Y', 'N' ) ),
sample_purge_interval VARCHAR ( 100 ) NOT NULL DEFAULT '',
uninteresting_connections_purge_interval VARCHAR ( 100 ) NOT NULL DEFAULT '',
purge_speed VARCHAR ( 100 ) NOT NULL DEFAULT '',
exception_delete_count BIGINT NOT NULL DEFAULT 0,
orphan_sample_set_delete_count BIGINT NOT NULL DEFAULT 0,
old_sample_set_delete_count BIGINT NOT NULL DEFAULT 0,
uninteresting_connections_delete_count BIGINT NOT NULL DEFAULT 0,
exception_delete_msec INTEGER NOT NULL DEFAULT 0,
orphan_sample_set_delete_msec INTEGER NOT NULL DEFAULT 0,
old_sample_set_delete_msec INTEGER NOT NULL DEFAULT 0,
uninteresting_connections_delete_msec INTEGER NOT NULL DEFAULT 0,

run_msec BIGINT NOT NULL COMPUTE (
CASE
WHEN ABS ( DATEDIFF ( YEAR, started_at, completed_at ) ) >= 4083
THEN CAST ( DATEDIFF ( HOUR, started_at, completed_at ) AS BIGINT ) * 60 * 60 * 1000

WHEN ABS ( DATEDIFF ( YEAR, started_at, completed_at ) ) >= 68
THEN CAST ( DATEDIFF ( MINUTE, started_at, completed_at ) AS BIGINT ) * 60 * 1000

WHEN ABS ( DATEDIFF ( DAY, started_at, completed_at ) ) >= 24
THEN CAST ( DATEDIFF ( SECOND, started_at, completed_at ) AS BIGINT ) * 1000

ELSE DATEDIFF ( MILLISECOND, started_at, completed_at )
END ) );
The table rroad_purge_run is used for monitoring Foxhound itself, in particular the internal database purge process.

The started_at column is initialized by DEFAULT CURRENT TIMESTAMP when the row is inserted, and completed_at is updated by the DEFAULT TIMESTAMP clause every time the row is updated. The completed_at column doesn't contain a real "completed at" value until the purge process is finished... unless the process crashes along the way, in which case it's pretty close.

The run_msec COMPUTE clause shows how the various DATEDIFF splat! points are avoided by making some conservative tests: if the difference is 4083 years or more use DATEDIFF ( HOUR, ... ), else if the difference is 68 years or more use MINUTE, and so on.
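Those threshold values aren't arbitrary; each one sits just below the point where the next-finer DATEDIFF unit overflows a signed 32-bit INTEGER. A quick check (Python, using a 365.25-day year as a rough yardstick; that's an assumption for the arithmetic, not Foxhound's exact rule):

```python
INT_MAX = 2**31 - 1  # DATEDIFF's INTEGER return type tops out here

ms_per_day   = 86_400_000
sec_per_year = 365.25 * 86_400   # ~31.6 million seconds per year
min_per_year = sec_per_year / 60

print(INT_MAX / ms_per_day)    # ~24.86 days:  MILLISECOND dies past 24 days
print(INT_MAX / sec_per_year)  # ~68.05 years: SECOND dies past 68 years
print(INT_MAX / min_per_year)  # ~4083 years:  MINUTE dies past 4083 years
```

Hence the CASE tests: at 24 days fall back from MILLISECOND to SECOND, at 68 years from SECOND to MINUTE, at 4083 years from MINUTE to HOUR.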

Here is a simulation showing how the COMPUTE works for small intervals. After the first UPDATE, run_msec is 593; after the second, it is 1140:

INSERT rroad_purge_run ( run_number ) VALUES ( DEFAULT ); -- work begins

WAITFOR DELAY '00:00:00.5'; -- some work is done

UPDATE rroad_purge_run
SET exception_delete_count = 123 -- a column is changed
WHERE run_number = 1;

SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;

started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-10-16 08:11:38.503',593

WAITFOR DELAY '00:00:00.5'; -- more work is done

UPDATE rroad_purge_run
SET orphan_sample_set_delete_count = 567 -- another column is changed
WHERE run_number = 1;

SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;

started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-10-16 08:11:39.050',1140
Here's another simulation showing how the COMPUTE works for 23 versus 25 days; there's no EXCEPTION, but there is a slight loss of precision in the 25-day case (the 11 milliseconds are lost: 1987200011 includes them, 2160000000 does not):

UPDATE rroad_purge_run
SET completed_at = DATEADD ( DAY, 23, DATEADD ( MILLISECOND, 11, started_at ) )
WHERE run_number = 1;

SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;

started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-11-08 08:11:37.921',1987200011

UPDATE rroad_purge_run
SET completed_at = DATEADD ( DAY, 25, DATEADD ( MILLISECOND, 11, started_at ) )
WHERE run_number = 1;

SELECT started_at, completed_at, run_msec
FROM rroad_purge_run;

started_at,completed_at,run_msec
'2010-10-16 08:11:37.910','2010-11-10 08:11:37.921',2160000000

Monday, October 25, 2010

Sybase goes crazy hiring

Well, maybe not all of Sybase, but surely one part is hiring like the recession's over. Do this Google search, see how many hits you get:

jobs "Sybase Federal, an SAP GSS Company"
A little bird told me that the successful 2010 US Census project has been a real door-opener for SQL Anywhere and MobiLink inside the US federal government.

Saturday, October 23, 2010

Cisco appreciates SQL Anywhere

For those of you who don't read SQLA, here's an excerpt from a recent "Sybase Partner News" email (the emphasis is mine):

Sybase Wins Cisco's Annual Supplier Appreciation Award for Software Excellence

September 30, 2010 - presented during its annual Supplier Appreciation Conference, this is the first year Cisco recognized software suppliers in addition to several categories for hardware suppliers. Cisco currently embeds SQL Anywhere in nearly 20 products, and cited Sybase iAnywhere's efforts to ensure Cisco's success with the technology. During the presentation, Cisco praised Sybase iAnywhere's unique approach, and the technical and relationship support provided by the Sybase team.

Friday, October 22, 2010

Well, don't leave us hanging... is this a bug or not?

Back on October 9 I posted "What's going on here?" about the suspicious behavior of a simple SELECT with and without a WHERE clause.

The first line said "Can something this simple really be so wrong?" and the last line said "This is a bug in SQL Anywhere 11.0.1.2276, right?"

Anonymous said...
Well, don't leave us hanging... is this a bug or not?

October 14, 2010 10:53 AM
[redacted] said...
I say it's a bug. The documentation pretty clearly states that the ORDER BY clause will be respected (and is in fact more or less required) for TOP and START AT.

Is there an index on the "data" column?

October 14, 2010 8:55 PM
Well, [redacted] is not alone. I thought it was a bug, too, until someone pointed out to me...

That's how TOP works!

TOP is evaluated after the WHERE clause has done its thing. Long after. I should know that: I used up ten pages in my book talking about the "Logical Execution of a SELECT". That section listed 15 separate steps, with the WHERE clause coming in at Step Number 3 and the TOP clause way down at Step Number 12.

Oh, and ORDER BY is back at Step Number 9.

In my defense, the actual SELECT I was working with was way more complex than the one described in "What's going on here?"... but I still wasted several hours over two days.

Still don't get it?

Don't feel bad, you haven't spent two days thinking about it like I did.

And not everyone can be Mark C. or V Barth or Phil, who all got it right, right away...
Mark C. said...
Breck: If there are five rows in the "top 10" of original query where data != 'B' then these five rows would not be included in the second query's result set and this would cause the output of the second query to be different.

October 9, 2010 11:21 AM
V Barth said...
Oh, a riddle?

I suggest there are 5 rows in the pkey range 1-9 WHERE data = 'B' IS NOT TRUE. Therefore the rows 10-14 are still part of the 2nd select but get skipped by the START AT clause as they are now among the first 9 rows.

October 10, 2010 11:27 AM
Phil said...
What's the data for pkey 1 through 9? I'm not an expert, but if there are 5 rows with data != 'B', then I would expect this result. I would read this query as "give me rows 10 through 15 that meet these conditions" not "give me the rows that meet these conditions from rows 10 through 15."

October 11, 2010 11:45 AM
If you got fooled into thinking it was a bug in SQL Anywhere, it might be because rows 1 through 9 were left out of "What's going on here?". I did that on purpose so you could be misled just like I was.

Here's the full demo...

CREATE TABLE t (
pkey INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
data VARCHAR ( 10 ) NOT NULL );

INSERT t VALUES ( 1, 'A' );
INSERT t VALUES ( 2, 'A' );
INSERT t VALUES ( 3, 'A' );
INSERT t VALUES ( 4, 'A' );
INSERT t VALUES ( 5, 'A' );
INSERT t VALUES ( 6, 'B' );
INSERT t VALUES ( 7, 'B' );
INSERT t VALUES ( 8, 'B' );
INSERT t VALUES ( 9, 'B' );
INSERT t VALUES ( 10, 'B' );
INSERT t VALUES ( 11, 'B' );
INSERT t VALUES ( 12, 'B' );
INSERT t VALUES ( 13, 'B' );
INSERT t VALUES ( 14, 'B' );
INSERT t VALUES ( 15, 'B' );
INSERT t VALUES ( 16, 'B' );
INSERT t VALUES ( 17, 'B' );
INSERT t VALUES ( 18, 'B' );
INSERT t VALUES ( 19, 'B' );
INSERT t VALUES ( 20, 'B' );
COMMIT;

SELECT TOP 5 START AT 10 *
FROM t
ORDER BY pkey;

pkey,data
10,'B'
11,'B'
12,'B'
13,'B'
14,'B'

SELECT TOP 5 START AT 10 *
FROM t
WHERE data = 'B'
ORDER BY pkey;

pkey,data
15,'B'
16,'B'
17,'B'
18,'B'
19,'B'
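The logical-execution-order explanation is easy to simulate outside SQL. Here's a Python sketch of the same data: the WHERE filter runs first, the ORDER BY next, and the TOP/START AT slice last of all:

```python
# 20 rows: pkey 1-5 have data 'A', pkey 6-20 have data 'B'
rows = [(pkey, 'A' if pkey <= 5 else 'B') for pkey in range(1, 21)]

def top_start_at(rows, top, start_at, where=lambda r: True):
    # WHERE is applied first (step 3), ORDER BY later (step 9),
    # and TOP ... START AT last of all (step 12)
    kept = sorted((r for r in rows if where(r)), key=lambda r: r[0])
    return kept[start_at - 1 : start_at - 1 + top]

print(top_start_at(rows, 5, 10))                         # pkeys 10-14
print(top_start_at(rows, 5, 10, lambda r: r[1] == 'B'))  # pkeys 15-19
```

With the filter in place there are only 15 qualifying rows (pkeys 6 through 20), so "start at row 10" of *that* set lands on pkey 15, exactly as SQL Anywhere reported.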

My new excuse? There were 22 million rows in the actual table. That's my story, and I'm sticking to it.

Wednesday, October 20, 2010

It's good to be INSENSITIVE

Here's something you may have heard from time to time:

Don't use cursors to make changes. Use set-oriented SQL statements instead.
But hey, sometimes you HAVE to write a COBOL-style program, one that uses an old-fashioned input-process-repeat loop to manipulate a result set one record, er, row at a time.

Sure, you can spend a couple of days figuring out how WINDOW works with UPDATE, or whether MERGE will work for your convoluted requirements... or you can spend a couple of hours writing a fetch loop.

If you do use a cursor, here's another slogan (this is the short form, the long form comes later):
Always make your cursors INSENSITIVE.
An insensitive cursor is a safe cursor, all other forms are spooky scary. If you like spooky scary, then fine, go ahead. But be prepared for never knowing if your code's going to work in all scenarios... well, you can never know that anyway, but with other kinds of cursors you can be pretty sure you WILL have problems, eventually.

Why is it good to be insensitive? Because an insensitive cursor is a stable cursor: it doesn't matter what goes on around it (in other connections, or even the same connection), the result set is fixed when the loop starts. With other types of cursors the rules are fantastically complicated... and the rules change from release to release.
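The stability argument has a familiar analogue outside SQL: iterating over a snapshot of a collection versus iterating over the live collection while changing it. Here's a Python illustration of the concept (of the concept only; this says nothing about SQL Anywhere internals):

```python
data = {1: 'A', 2: 'A', 3: 'B'}

# "Insensitive" style: materialize the result set first, then loop.
# Changes made inside the loop cannot disturb what the loop sees.
snapshot = list(data.items())
for pkey, value in snapshot:
    if value == 'A':
        data[pkey] = 'updated'   # safe: the snapshot is fixed

print(data)  # {1: 'updated', 2: 'updated', 3: 'B'}

# The "sensitive" equivalent - looping directly over the live dict
# while inserting new keys - raises RuntimeError in Python, and in SQL
# the corresponding cursor behavior is far murkier.
```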

OK, you've got questions, "Always make your cursors INSENSITIVE" is really simplistic. Maybe the long form will help:
If you are going to INSERT, UPDATE or DELETE any of the tables involved in a cursor definition, either directly (your code inside the fetch loop) or indirectly (say, when your connection fires a trigger that makes such a change, or when some other connection, even an EVENT that your code fires, makes such a change), then
  • always specify both INSENSITIVE and FOR READ ONLY, and

  • never use WHERE CURRENT OF, always use UPDATE and DELETE statements with ordinary WHERE clauses.
If you end up having performance problems, then consider changing INSENSITIVE to something else. Most of the time (80%? 90%? 99%?) you won't have to, and life will be that much safer and easier.
Here's a SQL fetch loop template using the wonderful FOR loop syntax:

FOR [loop name] AS [cursor name] INSENSITIVE CURSOR FOR
SELECT t.primary_key_column_1_of_2 AS @primary_key_column_1_of_2,
       t.primary_key_column_2_of_2 AS @primary_key_column_2_of_2,
       t.[some other column]       AS @[some other column],
       t.[some other column]       AS @[some other column]
  FROM t
 WHERE [some predicates]
 ORDER BY [some columns]
FOR READ ONLY
DO
   ...
   [references to the @variables implicitly declared in the SELECT]
   ...
   INSERT t ( [some column names] ) VALUES ( [some values] );
   ...
   UPDATE t
      SET t.[some other column] = [some value],
          t.[some other column] = [some value]
    WHERE t.primary_key_column_1_of_2 = @primary_key_column_1_of_2
      AND t.primary_key_column_2_of_2 = @primary_key_column_2_of_2;
   ...
   DELETE t
    WHERE t.primary_key_column_1_of_2 = @primary_key_column_1_of_2
      AND t.primary_key_column_2_of_2 = @primary_key_column_2_of_2;
   ...
END FOR;


Question: Isn't FOR READ ONLY redundant when you code INSENSITIVE? You can't use WHERE CURRENT OF with an INSENSITIVE cursor, can you?

Answer: Once upon a time (Version 7) INSENSITIVE cursors could be updatable. Besides, the rules for cursors change all the time, who knows what INSENSITIVE with FOR UPDATE might mean in the future... maybe at runtime it will morph into a value-sensitive cursor without telling you.

If you see the following statement in the Help for Version 8 or later, just ignore it, it's wrong (and it's being fixed):
INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.
Let's beat the point to death, er, drive the point home... if you printed out all the Help files, here's how you could fix them:
Version 5 is OK: A cursor declared INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 6 is OK: INSENSITIVE cursors ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 7 is OK: INSENSITIVE cursors ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 8 is wrong, strike it out: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 9 is wrong, strike it out: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 10 is wrong, strike it out: INSENSITIVE ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 11 is wrong, strike it out: INSENSITIVE clause ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.

Version 12 is wrong, strike it out: INSENSITIVE clause ... It does see the effect of PUT, UPDATE WHERE CURRENT, DELETE WHERE CURRENT operations on the same cursor.


More stuff...
Insensitive cursors
FOR statement
DECLARE CURSOR statement
Does anyone else WANT TO KNOW how cursors really work?

Monday, October 18, 2010

Turning Dark Clouds into Silver Linings

Here is the abstract of the keynote speech Glenn Paulley will be giving at the CIKM 2010 - 19th ACM International Conference on Information and Knowledge Management in Toronto, October 26 to 30, 2010:



Turning Dark Clouds into Silver Linings

Data management services are ubiquitous in the industry. With some important exceptions, relational database systems are the platform upon which many applications depend, including mainframe servers, web browsers, and handheld devices. The vast majority of these installations run unattended. Consequently, self-management, self-tuning, and self-healing features are of great importance to these systems. Cloud computing architectures, due to their inherent dynamics, add another level of flexibility - and complexity - to the problems of database self-management.

However, application developers continue to experience unpredictable performance and reliability issues with the application's software stack. Over the past few years, these issues have led to a variety of proposals to address the problem, including weak consistency models and the abandonment of SQL as a data management sub-language.

In this talk I'll present an overview of the data management problems in relational database systems that are exacerbated by cloud computing architectures, discuss the state-of-the-art in self-management technology, and conclude with some ideas for future research to address these problems.

Speaker Information



Glenn Paulley is a Director with Sybase iAnywhere Engineering, where he manages the research and development team responsible for query processing in SQL Anywhere, Sybase's self-managing relational database server. He joined Sybase iAnywhere in 1995. During his 20-year industrial career Glenn has held previous positions at Amdahl Corporation and at a large Canadian insurance company. He holds a Ph.D in Computer Science from the University of Waterloo. His research interests include software usability, query optimization, information systems architecture, design of Management Information Systems, topics in systems analysis, interfaces to database systems, database query languages, user models, multidatabase systems, and indexing techniques.

Thursday, October 14, 2010

Using HTTPS with your SQL Anywhere-based web server

This article is based on this SQLA question-and-answer, How do I set up a TYPE RAW web service to use HTTPS? and on this not-yet-published Foxhound FAQ: How do I specify the HTTPS transport-layer security for Foxhound?



If you have built TYPE RAW web services in SQL Anywhere, or (I'm guessing) TYPE HTML etcetera, you can support HTTPS without making any changes to your SQL code.

You can do this by modifying the SQL Anywhere startup command line to specify RSA encryption and the HTTPS protocol as follows:
  • Obtain an identity certificate and the associated private key for your server.

  • Store the identity certificate file in a known location.

  • Change the -xs option to specify https on the dbsrv*.exe command line used to start your SQL Anywhere server:
    Specify the identity certificate file and private key in the -xs https identity= and identity_password= parameters.

    Note that the default port for HTTPS is 443.
Here is an example of a dbsrv11.exe command line modified to allow only HTTPS access to Foxhound data using the sample certificate "%SQLANY11%\Bin32\rsaserver.id" that comes with SQL Anywhere 11:

(Note: Only the -xs line had to be changed; all the other stuff was there before.)

"%SQLANY11%\Bin32\dbspawn.exe"^
-f^
"%SQLANY11%\Bin32\dbsrv11.exe"^
-c 50p^
-ch 75p^
-cr-^
-gk all^
-gn 120^
-n foxhound1^
-o foxhound1_debug.txt^
-oe foxhound1_debug_startup.txt^
-on 10M^
-qn^
-sb 0^
-x none^
-xd^
-xs https(identity="%SQLANY11%\Bin32\rsaserver.id";identity_password=test;port=443;maxsize=0;to=600;kto=600)^
foxhound1.db^
-n f

If you want to allow both HTTP and HTTPS access, specify both as follows:

-xs http(port=80;maxsize=0;to=600;kto=600),https(identity="%SQLANY11%\Bin32\rsaserver.id";identity_password=test;port=443;maxsize=0;to=600;kto=600)^

To read more about -xs, see Starting the database server with transport-layer security and the -xs dbeng12/dbsrv12 server option in the Help.

How come I don't have to code SECURE ON?


You can add the SECURE ON clause to every single web service if you want...

CREATE SERVICE service_name TYPE 'RAW'
AUTHORIZATION OFF USER user_name SECURE ON
AS CALL procedure_name (
:parameter1,
:parameter2 );

...but you don't have to. If you have complete administrative control over the command line used to start SQL Anywhere, then omitting the SECURE ON clause gives you more flexibility in deciding whether or not, and when, to support HTTP and/or HTTPS.

That's the case with the SQL Anywhere database that is Foxhound: it's up to each customer to decide if they want HTTP (the default) and/or HTTPS, and if they want to lock it down to use only HTTPS they must have control over the command line.

To read more about SECURE ON, see the Help on the CREATE SERVICE statement and Mark Culp's answer to this question in SQLA.

Saturday, October 9, 2010

What's going on here?

Can something this simple really be so wrong? Here's a simple query in SQL Anywhere 11.0.1.2276 that returns 5 rows, starting at row number 10, from a table t where pkey is the PRIMARY KEY column of t, and that column is DEFAULT AUTOINCREMENT with values 1, 2, 3...:


SELECT TOP 5 START AT 10 *
FROM t
ORDER BY pkey;

pkey,data
10,'B'
11,'B'
12,'B'
13,'B'
14,'B'

So far so good... now supposing we add WHERE data = 'B' to the query; shouldn't the result set be the same?

But it's not! Look here:

SELECT TOP 5 START AT 10 *
FROM t
WHERE data = 'B'
ORDER BY pkey;

pkey,data
15,'B'
16,'B'
17,'B'
18,'B'
19,'B'

What's going on here? Clearly rows 10 through 14 have data = 'B', why aren't those rows showing up in the second query?

This is a bug in SQL Anywhere 11.0.1.2276, right?

For the answer see Well, don't leave us hanging... is this a bug or not?