Thursday, January 30, 2014

Latest SQL Anywhere Updates: 16.0.0.1761 for Windows

Current builds for the active platforms...

HP-UX     16.0.0.1691 (EBF 22263 SP6)        11 Dec 2013
 Itanium  12.0.1.3994 (EBF 22137 SP66)       02 Dec 2013
          11.0.1.2958 (EBF 21793 SP96)       08 Apr 2013     End of Life 31 May 2014

IBM AIX   16.0.0.1691 (EBF 22262 SP6)        11 Dec 2013
          12.0.1.3994 (EBF 22136 SP66)       02 Dec 2013
          11.0.1.2958 (EBF 21792 SP96)       08 Apr 2013     End of Life 31 May 2014

Linux     16.0.0.1691 (EBF 21981 SP6)        31 Oct 2013
          On Demand 1.0.4613 (SP3)           13 Sep 2013
          12.0.1.3994 (EBF 22111 SP66)       27 Nov 2013
          12.0.1 Chinese,                    16 Apr 2013
                 Japanese Docs (Eclipse)     16 Apr 2013
          11.0.1.3027 (EBF 21786 SP98)       13 Sep 2013     End of Life 31 May 2014

Mac OS    16.0.0.1565 (EBF 21800 SP3)        13 Sep 2013
          12.0.1.4006 (EBF 22318 SP67)       24 Dec 2013
          11.0.1.2449 Update                 29 Jun 2010     End of Life 31 May 2014

Solaris   16.0.0.1691 (EBF 22267 SP6)        11 Dec 2013
 SPARC    12.0.1.3994 (EBF 22138 SP66)       03 Dec 2013
          11.0.1.2958 (EBF 21794 SP96)       08 Apr 2013     End of Life 31 May 2014

Solaris   16.0.0.1691 (EBF 22265 SP6)        11 Dec 2013
 x64      12.0.1.3994 (EBF 22139 SP66)       02 Dec 2013
          11.0.1.2958 (EBF 21750 SP96)       08 Apr 2013     End of Life 31 May 2014

Windows   16.0.0.1761 (EBF 22392 SP7)    *** 22 Jan 2014 ***
          On Demand 1.0.4613 (SP3)           13 Sep 2013
          12.0.1.3994 (EBF 22109 SP66)       27 Nov 2013
          12.0.1 French,                     25 Sep 2012
                 English,                    25 Sep 2012
                 German,                     25 Sep 2012
                 Chinese,                    28 Mar 2013
                 Japanese Docs (HTML/PDF)    28 Mar 2013
          11.0.1.3069 (EBF 22299 SP99)       19 Dec 2013     End of Life 31 May 2014 

Other Stuff...

 Older Updates

 Free support! Q&A forum
   ...or, call Tech Support

 SQL Anywhere...
   ...Sybase home page 
   ...SAP home page 
   ...SAP Developer Center 

 Buy SQL Anywhere 

 Developer Edition... 
   [16.0] [12.0.1] [11.0.1]

 Download the...
   Educational Edition 
   Web Edition 

 Supported Platforms...
   SQL Anywhere 
   Linux 
   OnDemand

 ODBC Drivers for MobiLink

The asterisks "***" show which items have appeared on the Sybase website since the previous version of this page.
  • Only the latest fully-supported versions of SQL Anywhere (11.0.1, 12.0.1, 16.0 and On Demand) are shown here.

  • The "EBF 22109 SP66" numbers are the new SAP-specific codes associated with the build numbers "12.0.1.3994".

  • Just because an older version or different platform isn't "fully supported" any more doesn't mean you can't download files (or ask questions, or get help), it just means there won't be any more new Updates released.

Friday, January 17, 2014

SQL Anywhere Sessions at Techwave 2014

Here are the SQL Anywhere sessions at the Techwave 2014 [cough] ISUG-TECH Conference in Atlanta on April 14 to 17.

Here's the email from ISUG...

from: ISUG-TECH Conference
date: Thu, Jan 16, 2014 at 10:35 AM
subject: Content for 2014 ISUG-TECH Conference Now Online

ISUG-TECH Conference, April 14-17, 2014
Atlanta, GA, USA

Dear ISUG-TECH Member,

We are pleased to announce that the content for our 2014 conference in Atlanta, GA on April 14-17, is now online and registration is currently open!

We will be presenting more than 150 hours of in-conference sessions, and more than 60 hours of optional workshops before and after the main event. We have some phenomenal speakers who will be sharing their experiences, best practices and advice with our attendees. Some of the biggest names in the SAP database and developer ecosystem will be in Atlanta so be sure not to miss out!

We'll have seven product tracks during the conference: SAP Adaptive Server Enterprise, SAP HANA, SAP IQ, SAP PowerDesigner, SAP PowerBuilder, SAP Replication Server and SAP SQL Anywhere. There will also be a general SAP Development track that will cover developing applications for a variety of SAP Products.

Now for some more good news! We still have some of our extra-special early bird slots available, but these will sell out quickly. If you're not aware of what these are, we are bundling three nights of hotel accommodations at the Hilton Atlanta in with the conference fee to the first 250 people to register. That will save you more than $400 to attend this major event.

We're pleased to announce that SAP is our Diamond Sponsor for this conference. The support and commitment we've received from SAP is outstanding and we are all excited to see the company's passion in making this event a big success. But we need you to join us to make that happen. General registration for the conference is $1195, but our Gold members will receive a special discount of $300 off that fee. (Learn more about our Gold membership here.)

So don't delay; take a moment now to explore the conference sessions and workshops, and then start your registration!

Best Regards,

The ISUG-TECH Conference Team

Wednesday, January 15, 2014

Capturing and Storing Plans With GRAPHICAL_PLAN()

An earlier article about calling the GRAPHICAL_PLAN() procedure in SQL Anywhere showed an alternative to Database Tracing wherein you add custom code to your stored procedures to capture query plans. Both techniques (custom code and Database Tracing) share one very important advantage: capturing realistic production query plans in their natural habitat is better than struggling (and failing) to simulate production scenarios in ISQL.

When used together with the Procedure Profiling feature, the custom code approach lets you focus on queries you know are slow (because the Procedure Profiler says so) rather than wading through the massive amounts of output that Database Tracing produces.

Yes, you can tell the Database Tracing feature to filter out fast queries, but that reveals another advantage to the custom code approach: You can easily capture different plans for the same query; for example, one kind of plan when it's running slow-like-continental-drift, and another when it runs quickly. Some of the hardest problems to solve involve queries with changing behavior, and the custom code approach lets you capture both behaviors and compare them. With Database Tracing, unless you capture everything (all the fast plans as well as the slow), you'll only see the slow plans.

This article expands on the earlier technique by storing the plans returned by GRAPHICAL_PLAN() in a table rather than immediately writing them to *.saplan files for display in ISQL's Graphical Plan Viewer.

Here is a step-by-step demonstration using a stored procedure that was running very slowly: the code behind the new "Connection History" page being added to the next version of the Foxhound database monitor.



Step 1: Restart the target database.

This will ensure the performance tests at least begin with a fresh (empty) RAM cache. It's good to take advantage of a large cache to improve performance, but sometimes you can't rely on data being in memory, so testing with an empty cache is important.
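As an aside, if restarting the server isn't convenient, the cache can also be emptied in place with a system procedure; a minimal sketch, assuming DBA authority:

```sql
-- Empty the database cache without restarting the server,
-- so the next performance test starts cold.
CALL sa_flush_cache();
```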

Step 2: Turn on the procedure profiler.

Here's how to launch the SQL Anywhere 16 procedure profiler in Sybase Central:
  • Connect to the target database.

  • Click on the Mode menu item, then

  • click on Application Profiling.

  • The Application Profiling frame will appear at the bottom of the window.

  • Check Stored procedure, function, trigger, or event execution time, then

  • click Finish.

  • The Profiling Stored Procedures, Functions, Triggers, and Events display will appear.
To see the profile results:
  • Run your tests,

  • click on View - Refresh All, then

  • look in the Profiling Results tab.
To stop profiling:
  • Look in the Profiling Stored Procedures, Functions, Triggers, and Events display, then

  • click on the Stop Collecting Profiling Information on Selected Databases button.
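If you'd rather skip the GUI, the same profiler can be driven from SQL; a hedged sketch, assuming DBA authority:

```sql
-- Turn the procedure profiler on for the whole server...
CALL sa_server_option ( 'ProcedureProfiling', 'YES' );

-- ...run the tests, then look at the results...
CALL sa_procedure_profile_summary();

-- ...and turn profiling back off.
CALL sa_server_option ( 'ProcedureProfiling', 'NO' );
```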

Step 3: Test the application.

It helps to have a script to follow so you can do before-and-after comparisons to prove the changes you make are actually improvements. In this example, the script contained 19 distinct user commands via the GUI: open the page, scroll down one row, scroll down 20 rows, and so on.

Step 4: Examine the procedure profiling results.

There's no easy way to do this if you're running a lot of large stored procedures; you have to open each procedure in Sybase Central's Profiling Results tab, then scroll . . . slowly . . . all . . . the . . . way . . . through the SQL code while watching carefully for hot spots.

Step 5: Identify which queries need attention.

Here's the slowest query in the test; according to the procedure profiler it was executed 19 times for a total of 40 seconds:



Step 6. Insert "eyecatcher" comments in the code.

Line numbers change, and so does performance; adding an identifying comment to each slow query lets you keep track of hot spots even if you don't get around to working on them right away.
--\\\\\ Test 1 line 220 (19 40472 30 220) -- Determine oldest and newest connection history samples.

SELECT TOP 1
       rroad_group_2_property_pivot.sample_set_number
  INTO @oldest_connection_history_sample_set_number
  FROM rroad_group_2_property_pivot
 WHERE rroad_group_2_property_pivot.connection_id_string = @connection_id_string
 ORDER BY rroad_group_2_property_pivot.sample_set_number ASC;

Step 7. Turn off the procedure profiler.

The next step will be to capture graphical plans and you don't want the extra overhead of the procedure profiler to affect the results (although this may be a moot point since the procedure profiler seems to be a lightweight process as far as performance is concerned).

Step 8. Create a table to store the plans.

CREATE TABLE captured_plan (
   plan_id                 INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   query_title             VARCHAR ( 200 ) NOT NULL,
   capture_started_at      TIMESTAMP NOT NULL,
   capture_duration_msec   BIGINT NOT NULL,
   plan                    LONG VARCHAR NOT NULL DEFAULT '' );

The plan_id column is an artificial primary key.

The query_title column uniquely identifies each SQL query being studied so multiple plans for the same query can be studied together.

The capture_started_at column keeps a record of when the test was run; this can be handy when you're purging old rows.

The capture_duration_msec column gives a rough estimate of how long the query ran. It is not accurate, because it includes the time required to build the graphical plan XML, but in many cases it is helpful when looking for fast versus slow queries; e.g., 100 msec is faster than 10,000 msec regardless of overhead.

The plan column holds the GRAPHICAL_PLAN() return values.
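For example, here's one way the capture_started_at column might be used for housekeeping (a sketch; the seven-day cutoff is an arbitrary choice):

```sql
-- Purge plans captured more than seven days ago.
DELETE captured_plan
 WHERE capture_started_at < DATEADD ( DAY, -7, CURRENT TIMESTAMP );
```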

Step 9. Prepare to call GRAPHICAL_PLAN().

Each block of code (stored procedure, trigger, event, etc) that's going to call GRAPHICAL_PLAN() needs these four local variables:
/* START TEMPORARY CODE */
DECLARE @plan_id               INTEGER;
DECLARE @query_title           VARCHAR ( 200 );
DECLARE @capture_started_at    TIMESTAMP;
DECLARE @plan                  LONG VARCHAR;
/* END TEMPORARY CODE */

Step 10. Add calls to GRAPHICAL_PLAN().

Here's a template to copy and paste (and then edit) ahead of each slow query:
/* START TEMPORARY CODE */
SET @query_title = '[TEXT TO UNIQUELY IDENTIFY THE SQL QUERY]';
IF ( SELECT COUNT(*) FROM captured_plan WHERE query_title = @query_title ) 
< [MAXIMUM NUMBER OF PLANS TO CAPTURE] 
THEN
   SET @plan_id = GET_IDENTITY ( 'captured_plan' );
   SET @capture_started_at = CURRENT TIMESTAMP;
   SET @plan = GRAPHICAL_PLAN (
      '
      [COPY AND PASTE THE SQL QUERY HERE]
      ',
      2, -- Detailed statistics including node statistics
      'asensitive',
      'READ-ONLY' );
   INSERT captured_plan VALUES ( 
      @plan_id,  
      @query_title,  
      @capture_started_at,  
      DATEDIFF ( MILLISECOND, @capture_started_at, CURRENT TIMESTAMP ), -- capture_duration_msec 
      @plan );
END IF;
/* END TEMPORARY CODE */
The template contains three placeholders for you to edit:
  • Line 2: Replace [TEXT TO UNIQUELY IDENTIFY THE SQL QUERY] with a string you will recognize when browsing through dozens of entries in the captured_plan table.

  • Line 4: Replace [MAXIMUM NUMBER OF PLANS TO CAPTURE] with an integer like 1 or 2 to put a limit on the number of plans captured in case the query is executed many times.

  • Line 10: Replace [COPY AND PASTE THE SQL QUERY HERE] with the actual query, then double-up any embedded single quotes, and make any other necessary changes as discussed in Capture Plans With GRAPHICAL_PLAN().
The call to GRAPHICAL_PLAN() is coded ahead of the INSERT statement, rather than being included in the INSERT VALUES list, so a value can be calculated for the capture_duration_msec column.

Here's what the template looked like after editing:

/* START TEMPORARY CODE */
SET @query_title = 'Test 1 line 220';
IF ( SELECT COUNT(*) FROM captured_plan WHERE query_title = @query_title ) 
< 2  
THEN
   SET @plan_id = GET_IDENTITY ( 'captured_plan' );
   SET @capture_started_at = CURRENT TIMESTAMP;
   SET @plan = GRAPHICAL_PLAN (
      '
SELECT TOP 1
       rroad_group_2_property_pivot.sample_set_number
  FROM rroad_group_2_property_pivot
 WHERE rroad_group_2_property_pivot.connection_id_string = @connection_id_string
 ORDER BY rroad_group_2_property_pivot.sample_set_number ASC;
      ',
      2, -- Detailed statistics including node statistics
      'asensitive',
      'READ-ONLY' );
   INSERT captured_plan VALUES ( 
      @plan_id,  
      @query_title,  
      @capture_started_at,  
      DATEDIFF ( MILLISECOND, @capture_started_at, CURRENT TIMESTAMP ), -- capture_duration_msec 
      @plan );
END IF;
/* END TEMPORARY CODE */

Step 11. Restart the target database.

Step 12. Re-run the test.

Step 13. Study the graphical plans.

In this example, calls to GRAPHICAL_PLAN() were inserted ahead of seven slow queries, resulting in 12 plans being captured:
SELECT plan_id,
       LEFT ( query_title, 18 ) AS query_title,
       capture_started_at,
       capture_duration_msec
  FROM captured_plan 
 ORDER BY query_title, 
       plan_id;

    plan_id query_title        capture_started_at      capture_duration_msec 
----------- ------------------ ----------------------- --------------------- 
         11 Test 1 line 1286   2014-01-14 11:59:17.359                  6422 
          3 Test 1 line 1576   2014-01-14 11:56:39.953                  2093 
          7 Test 1 line 1576   2014-01-14 11:57:03.062                  1891 
          2 Test 1 line 1589   2014-01-14 11:56:35.781                  2656 
          6 Test 1 line 1589   2014-01-14 11:56:59.375                  2171 
          4 Test 1 line 1604   2014-01-14 11:56:43.562                    31 
          8 Test 1 line 1604   2014-01-14 11:57:06.484                    16 
          9 Test 1 line 1632   2014-01-14 11:57:54.578                    47 
         10 Test 1 line 1632   2014-01-14 11:58:05.953                    31 
         12 Test 1 line 1660   2014-01-14 11:59:36.406                  1906 
          1 Test 1 line 220    2014-01-14 11:55:58.000                 35125 
          5 Test 1 line 220    2014-01-14 11:56:57.812                   938 
Here's the code that created *.saplan files for the two plans for the query at line 220; plan_id = 1 for the first (slow) execution, and plan_id = 5 for the second (fast) execution:
SET TEMPORARY OPTION allow_write_client_file = 'On';
SELECT WRITE_CLIENT_FILE (
          STRING (
             'c:\\temp\\plan for ',
             query_title,
             ', plan_id ',
             plan_id,
             ', ',
             capture_duration_msec,
             ' msec',
             '.saplan' ),
          plan )
  FROM captured_plan 
 WHERE plan_id IN ( 1, 5 );

The WRITE_CLIENT_FILE() function was used instead of xp_write_file() because the target database was running on a different computer, and the SET TEMPORARY OPTION allow_write_client_file = 'On' statement was used so WRITE_CLIENT_FILE() would actually work.
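For the record, here's what the single-computer alternative might look like (a sketch using the same captured_plan rows; xp_write_file() writes on the server's own file system, so the client-file permission option isn't needed):

```sql
-- Write the two plans to files on the server computer.
SELECT xp_write_file (
          STRING ( 'c:\\temp\\plan_id_', plan_id, '.saplan' ),
          plan )
  FROM captured_plan 
 WHERE plan_id IN ( 1, 5 );
```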

Here are the two plans; they're identical except for the run times, which shows that the RAM cache affected performance but didn't change the plan:

Step 14. Propose improvements and perform experiments.

Every application has at least one "Table From Hell". Foxhound is no exception, so it came as no surprise that the slow query in this example involved that table:
CREATE TABLE rroad_group_2_property_pivot (
   sampling_id                       UNSIGNED INTEGER NOT NULL,
   sample_set_number                 UNSIGNED BIGINT NOT NULL,
   connection_number                 BIGINT NOT NULL,
   connection_id_string              VARCHAR ( 50 ) NOT NULL COMPUTE ( STRING (  
                                        sampling_id,
                                        '-',
                                        connection_number, 
                                        '-',
                                        DATEFORMAT ( LoginTime, 'YYYYMMDDHHNNSS-SSS' ) ) ),
   ...
   PRIMARY KEY ( sample_set_number, connection_number ) );

CREATE INDEX xconnection_id_string ON rroad_group_2_property_pivot ( connection_id_string );

This table often contains millions of rows and consumes gigabytes of disk space; in this example it "only" contained 397,858 rows in 270M of disk space. Indexes do exist on the columns referenced in both the WHERE clause and the ORDER BY...

SELECT TOP 1
       rroad_group_2_property_pivot.sample_set_number
  FROM rroad_group_2_property_pivot
 WHERE rroad_group_2_property_pivot.connection_id_string = '1-21-20140114132910-406' 
 ORDER BY rroad_group_2_property_pivot.sample_set_number ASC
...but alas, SQL Anywhere's decision to use the PRIMARY KEY index did not result in very good performance.

SQL Anywhere did consider both the PRIMARY KEY index and the secondary CREATE INDEX xconnection_id_string...

Index Scan
Scan rroad_group_2_property_pivot using index rroad_group_2_property_pivot (all rows)

Indexes considered
rroad_group_2_property_pivot
seq
xconnection_id_string - Selectivity 3.741826117%

...and while it's tempting to try a FORCE INDEX ( xconnection_id_string ) clause to make it use the other index, that should never be the first path chosen. Even if it did improve performance, today's excellent FORCE INDEX choice could create tomorrow's Query From Hell, and it's often (always?) better to let SQL Anywhere decide how to build your plans.

Instead of making SQL Anywhere choose between two indexes on two different columns, one alternative is to create a single index involving both columns. In this case, connection_id_string was chosen as the first column in the index since it's the one performing a selection (connection_id_string = '1-21-20140114132910-406'), and sample_set_number was placed second in the index because it's used to ORDER BY the rows returned by the selection.

In this case, rather than create another index, the existing CREATE INDEX was changed from this:

CREATE INDEX xconnection_id_string ON rroad_group_2_property_pivot ( connection_id_string );

to this composite index:

CREATE INDEX xconnection_id_string ON rroad_group_2_property_pivot ( connection_id_string, sample_set_number );

When the query was retested in ISQL after restarting the target database, the result was startling; from 35 seconds down to (essentially) zero seconds:

Before...                           After...
Index Scan                          Index Only Retrieval Scan
index rroad_group_2_property_pivot  index xconnection_id_string
FirstRowRunTime 34.846              FirstRowRunTime 5.6441e-005

An "Index Only" scan is the Holy Grail of performance tuning: no data is read from the table, just the index, and the results here show why.

Step 15. Make changes to optimize the queries.

In this case, "make changes" means putting the new CREATE INDEX into the Foxhound build.

Step 16. Comment-out the calls to GRAPHICAL_PLAN().

In this example, the new index definition was likely to improve many of the slow queries, so it's worth performing "after" performance tests on everything. The calls to GRAPHICAL_PLAN() add a lot of overhead, so they have to go... but only commented out, not deleted, since some of the queries may still be slow and may be involved in the next round of optimization (thus requiring the calls to GRAPHICAL_PLAN() to be reinstated).

Step 17. Restart the target database.

Step 18: Turn on the procedure profiler.

Step 19. Re-run the test.

Step 20: Examine the new procedure profiling results.

Here's the slow query from before, now located at line 253. As predicted by the ISQL test in Step 14, the procedure profiler now reports that a total of only 75 msec was required to run all 19 executions, rather than the 40,472 msec reported in Step 5:

Step 21: Verify improvements and look for new hot spots.

In this example, the new CREATE INDEX did indeed speed up several of the old, slow queries, but not all of them... so the process begins again at Step 5, and repeats until the Third Rule Of Program Optimization applies:
3. When it's fast enough, stop.
What are the first two rules of program optimization?
1. Don't do it.

2. (for experts only) Don't do it yet.

Monday, January 13, 2014

Stellar Phoenix Repair for SQL Anywhere

Stellar Phoenix Repair for SQL Anywhere is a database recovery tool for version 9, 10, 11 and 12 SQL Anywhere databases that was introduced in September 2012. I don't have any databases that need recovery, so this isn't a product review; it's more like a "first glance".

Hubba, Hubba!

OK, at first glance it is impressive... any utility that can open a 16 million row, 11G .db file without launching SQL Anywhere itself, and display all the table names, and all the data, has my attention.


The GUI does have a few glitches. First of all, the right-hand data pane doesn't display the table name, so it's easy to lose track of which table you're looking at when the name is no longer highlighted in the left pane... which is what happens when you click anywhere on the right pane.

Also, if you resize the column widths in the right pane, they tend to revert to the (narrow) defaults if you click on something else, like the (same) table name in the left pane.

...but, the Stellar Phoenix Repair tool isn't ISQL or Sybase Central, you're not going to use it to write reports, you're going to use it ... when ... you're ... absolutely ... desperate to get your data back.

How well it works for that purpose, I don't know... and I'm not sure how to go about finding out other than waiting until I'm ... absolutely ... desperate :)


Wednesday, January 8, 2014

SQL Server Proxy Tables Via DSN-Less Connections

Question: How do I set up a DSN-less connection to a SQL Server database for the purpose of using proxy tables?

Answer: Creating and maintaining ODBC DSNs is a real pain in the [redacted]; using a DSN-less connection is actually quite easy:

CREATE SERVER mss 
   CLASS 'MSSODBC' 
   USING 'Driver=SQL Server Native Client 10.0; Server=Envy; Database=test; Trusted_Connection=yes;';

CREATE EXISTING TABLE proxy_t AT 'mss.test.dbo.t';

SELECT * FROM proxy_t;

       pkey        data 
----------- ----------- 
          1           1 
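Proxy tables aren't limited to SELECT; DML statements flow through to the remote SQL Server table as well. A minimal sketch (the values are made up):

```sql
-- INSERT through the proxy table updates the remote SQL Server table.
INSERT proxy_t VALUES ( 2, 2 );
COMMIT;

SELECT * FROM proxy_t ORDER BY pkey;
```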
You can read about SQL Anywhere proxy tables here:
Remote data access
and SQL Server Trusted Connections here:
Choose an Authentication Mode


Monday, January 6, 2014

Making Faster Web Pages


"... new CSS 3.0 features can help create lightweight pages by providing built-in support for gradients, rounded borders, shadows, animations, transitions, and other graphical effects that previously required images to be loaded."
Rules for Mobile Performance Optimization by Tammy Everts, August 2013, Communications Of The ACM.

That's just one of many dozens of optimization tips, hints and ideas contained in an article with a really bad title: "Rules for Mobile Performance Optimization".

First of all, the article is all about web page optimization, nothing about hardware, not even apps.

Second, it isn't just about mobile web pages; the article applies to tethered desktop performance as well. Yes, there's stuff about optimizing m. websites, but much, much more about web pages in general.

The Title Is Not Important

What's important is that the tips and ideas apply to websites served up by SQL Anywhere's built-in HTTP server, and that's why it's important to readers of this blog.

Here are a few tidbits from the article...
  • Concentrate on reducing the number of HTTP requests, reducing the size of responses and optimizing client-side code.

  • Embed scripts and images for first-time loading.

  • Compress images, scripts and CSS.

  • Minify scripts (just like we did with BASICA in 1981 :)

  • Resize images to match the display size.

  • Simplify pages with new HTML5 features.

  • Simplify pages with new CSS3 features.

  • Defer rendering below-the-fold images.

  • Defer loading scripts.
Guaranteed, there's stuff in this article you haven't thought of, stuff you've never heard of, and stuff you'll want to try... right now (like CSS3 features) or in the future (HTTP2).
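As a SQL Anywhere angle on the "compress" tip, here's a hedged sketch of serving a gzip-compressed stylesheet from the built-in HTTP server; the service name, procedure, and web_assets table are all made up for illustration:

```sql
-- Serve a pre-stored stylesheet with gzip Content-Encoding.
CREATE SERVICE styles TYPE 'RAW' AUTHORIZATION OFF USER DBA AS
   CALL serve_css();

CREATE PROCEDURE serve_css()
BEGIN
   CALL sa_set_http_header ( 'Content-Type',     'text/css' );
   CALL sa_set_http_header ( 'Content-Encoding', 'gzip' );
   SELECT COMPRESS ( css_text, 'gzip' )
     FROM web_assets
    WHERE name = 'site.css';
END;
```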

Oh, About Those New Features...

Here are two websites mentioned in the article:
  • caniuse.com - Compatibility tables for support of HTML5, CSS3, SVG and more in desktop and mobile browsers

  • mobilehtml5.org - HTML5 compatibility on mobile and tablet browsers with testing on real devices