Channel: SCN : Discussion List - SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications
Viewing all 685 articles

TDS_Capability packet difference between 12.5.4 and 15.7


(SAP Incident 914404 / 2014)

 

Hello,

 

We migrated a 12.5.4 server to 15.7 but had to revert to 12.5.4 because the server behaved differently.

 

We have a query that selects a char(4) column containing 'OK', using jConnect (jconn4).

In 12.5.4 the application gets 'OK' as the value, while when selecting from the 15.7 server the application gets 'OK  ' (with trailing blanks).

This results in errors in the application.

 

My colleague examined the traffic between client and server and found a difference in the TDS_CAPABILITY packet transferred between them.

 

In 15.7 bit 34 in the ValueMask is set to 1, while in 12.5.4 this bit is 0.

 

Bit 34 means:

#define CS_RES_NOSTRIPBLANKS (CS_INT)34

 

Does anyone know of a trace flag to get the old behaviour back?

 

There is a 'new' JDBC option, STRIP_BLANKS, that strips trailing blanks, but according to the documentation it would also strip leading blanks. Some testing shows that this is not the case. We have to be sure that the documentation is wrong before turning this option on.
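For what it's worth, the difference between the documented and the observed STRIP_BLANKS behaviour can be sketched in Python (this is only an illustration of the two semantics, not jConnect code):

```python
# Illustration of the two possible STRIP_BLANKS semantics for a
# char(4) column whose content 'OK' is blank-padded to 'OK  '.

def strip_trailing(value: str) -> str:
    """What our testing suggests STRIP_BLANKS actually does."""
    return value.rstrip(' ')

def strip_both(value: str) -> str:
    """What the documentation claims (leading blanks stripped too)."""
    return value.strip(' ')

fetched = 'OK  '   # char(4) value as returned by the 15.7 server
padded = '  OK'    # hypothetical value with leading blanks

print(repr(strip_trailing(fetched)))  # 'OK'
print(repr(strip_trailing(padded)))   # '  OK' - leading blanks preserved
print(repr(strip_both(padded)))       # 'OK'   - leading blanks lost
```

If the testing holds up, only values with significant leading blanks would be at risk from enabling the option.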

 

Thanks,

Luc.


message: Direct i/o enabled for ufs file


Hi All,

 

I have an ASE server: Adaptive Server Enterprise/15.5/EBF 19395 SMP ESD#5/P/Sun_svr4/OS 5.8/

 

 

In my ASE log I have many instances of the following message:

 

00:25:00000:00916:2014/09/17 11:13:16.30 kernel  Direct i/o enabled for ufs file '/global/iva/log/dev_iva_log.dat'

00:11:00000:00397:2014/09/17 11:14:52.51 kernel  Direct i/o enabled for ufs file '/global/spm/log/dev_spm_log.dat '

00:57:00000:00277:2014/09/17 11:15:12.83 kernel  Direct i/o enabled for ufs file '/global/bidc/log/dev_bidc_log.dat '

00:10:00000:00258:2014/09/17 11:15:30.22 kernel  Direct i/o enabled for ufs file '/global/sef/log/dev_sef_log.dat '

00:23:00000:00386:2014/09/17 11:16:09.01 kernel  Direct i/o enabled for ufs file '/global/snc/log/dev_snc_log.dat'

00:13:00000:00450:2014/09/17 11:16:14.14 kernel  Direct i/o enabled for ufs file '/global/cam/log/dev_cam_log.dat'

 

 

Does this message represent a problem?

 

I found a case in the Sybase solved cases related to my issue, Solved Cases (#11783990), but when I try to open it I cannot, even though I have a user and password for that site.

 

 

 

Thank you for your help

France - French "What's New" session : SAP ASE 16 & SAP IQ - 15/10

ASE MDA columns Hkgc*


In monOpenObjectActivity I have objects in the 500K range for HkgcOverflows and HkgcPending. This has occurred after a restart only a few days ago. The counters for some objects have obviously wrapped around the 32-bit limit, as they are around 2 billion.

From the scant information I have found so far, this indicates the housekeeper garbage collection queue is building up work and sometimes overflowing. That sounds like something I would want to try to tune. Does anyone know if this is something I should be concerned about? If so, how do I tune it?

BTW, the queue length per engine is monEngine.HkgcMaxQSize = 13360. I can't find where that is configured, if that is what I am supposed to tune. Thanks! Doug
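As an aside, if these really are 32-bit counters that have wrapped, the delta between two samples can still be recovered modulo 2^32. A minimal Python sketch, assuming unsigned 32-bit counters (which the ~2B values suggest):

```python
# Sketch: recovering the true delta of a monotonically increasing
# 32-bit counter (e.g. an MDA column) across a wraparound.

UINT32 = 2**32

def counter_delta(earlier: int, later: int) -> int:
    """Delta between two samples of a wrapping 32-bit counter."""
    return (later - earlier) % UINT32

# Example: a sample of 4,294,000,000 followed by a wrapped sample of 500,000
print(counter_delta(4_294_000_000, 500_000))  # 1467296
```

This only works if the counter wrapped at most once between samples, so sample often enough relative to the growth rate.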

Can't kill process created by Sybase Java


Hi ,

I have a problem with a new application that works with Sybase from Java.

When I restart the application, I see the following in sysprocesses:

 

1015171206488927recv sleep  703TestjTDS123AWAITING COMMAND1060529040[NULL][NULL]40960EC2MEDIUMANYENGINE041[NULL]000########111.111.111.111[NULL]

 

 

and when I execute "kill 1015", nothing happens.

When I select from sysprocesses I still see that spid.

 

Has anybody had a similar situation?

How can I kill this process? The situation repeats only with Java processes.

Version: Sybase ASE 15.5 ESD 5.1

C# BulkCopy only sometimes supports unsigned bigint fields


We're trying to bulk copy into a table with an unsigned bigint column, and sometimes it works and sometimes it doesn't.

 

I've created a variety of tables and BCP-ed into them.

Here are the tables and whether they work:

 

Table : lock scheme AllPages. BCP WORKS

UId    int

BId    smallint

Line   int

PID    int

PDate  datetime

PV     varchar(255)

ES     unsigned bigint

 

Table : lock scheme DataRows. BCP FAILS

UId    int

BId    smallint

Line   int

PID    int

PDate  datetime

PV     varchar(255)

ES     unsigned bigint

 

(Change type to signed bigint)

Table : lock scheme DataRows. BCP WORKS

UId    int

BId    smallint

Line   int

PID    int

PDate  datetime

PV     varchar(255)

ES     bigint

 

(Make unsigned bigint not the last column)

Table : lock scheme DataRows. BCP WORKS

UId    int

BId    smallint

Line   int

PID    int

PDate  datetime

ES     unsigned bigint

PV     varchar(255)

 

It seems that unsigned bigint columns work as long as the column is not the last one in a DataRows-locked table.

Is this a known limitation/bug?

Is there a fix?
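One caveat on the "change the type to signed bigint" workaround above: it only round-trips values below 2^63. A quick sketch of the range arithmetic, independent of ASE:

```python
# The signed-bigint workaround is only safe for values that fit in a
# signed 64-bit integer; unsigned bigint covers roughly twice that range.

SIGNED_MAX = 2**63 - 1      # bigint upper bound
UNSIGNED_MAX = 2**64 - 1    # unsigned bigint upper bound

def fits_in_signed_bigint(value: int) -> bool:
    """True if the value survives a cast to signed bigint unchanged."""
    return 0 <= value <= SIGNED_MAX

print(fits_in_signed_bigint(10**18))      # True  - safe to store as bigint
print(fits_in_signed_bigint(2**63 + 42))  # False - needs unsigned bigint
```

So the workaround is fine if the application never produces values at or above 2^63; otherwise reordering the columns looks like the safer option.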

C# BulkCopy column mappings don't work with IDataReader


We're using C# ADO 15.7 SDK SP122

 

and we've found the C# BulkCopy column mappings don't work with IDataReader.

 

If we set up the mappings (AseBulkCopyMapping) and use WriteToServer(DataTable) it all works correctly.

If we use WriteToServer(IDataReader) then the mappings are ignored.

 

Comparing the code AseBulkCopyBusinessBulk(IDataReader reader, AseBulkCopy bulkCopy, int batchSize)

to AseBulkCopyBusinessBulk(DataTable table, DataRowState rowState, AseBulkCopy bulkCopy, int batchSize)

 

there's code to handle the mappings for DataTable.

 

Am I reading this correctly? Is this fixed in a later version?

 

Thanks

What causes the insert delay in tempdb, and how can I improve it?


I have ASE 12.5. The tempdb is on ramdisk.

Then I have a SP with a query like

select * from mytab....

...

 

When I run this SP, the result is only 11 rows. It takes about 2 seconds.

Then I changed the SP to keep the result in a named temp table, like:

 

Create table #tmptab...

 

Insert into #tmptab

select * from mytab....

 

then run it again: now it takes about 6 seconds, a big difference. With only 11 rows and everything done in memory, it should not be that much slower. How can I figure out what is happening and improve the performance in this case?


How to use a temp table inside a called procedure?


Suppose I have a stored procedure sp1 which calls another procedure sp2, like:

 

create proc sp1

as

begin

 

    CREATE TABLE #table1 (......)

    Exec sp2

 

end

 

then I want to use #table1 in sp2, like

 

create proc sp2

as

begin

 

....

   delete from #table1

   INSERT INTO #table1

......

end

 

How do I make #table1 available to sp2? Is there a way to do this?

Best size for the procedure cache?


here is my dbcc memusage output:

 

DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.

Memory Usage:

 

                                   Meg.           2K Blks                Bytes

      Configured Memory:     14648.4375           7500000          15360000000

Non Dynamic Structures:         5.5655              2850              5835893

     Dynamic Structures:        70.4297             36060             73850880

           Cache Memory:     13352.4844           6836472          14001094656

      Proc Cache Memory:        85.1484             43596             89284608

          Unused Memory:      1133.9844            580600           1189068800

 

So is the proc cache too small? I could put the 1133 MB of unused memory into the proc cache, but many suggest that the proc cache should be 20% of total memory.

 

I am not sure whether that means 20% of 'max memory' or of total named cache memory.
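Plugging the memusage numbers into the 20% rule of thumb (assuming 'total memory' means the configured/max memory shown above):

```python
# Worked numbers from the dbcc memusage output above, checking the
# "proc cache should be ~20% of total memory" rule of thumb.

configured_mb = 14648.4375   # Configured Memory
proc_cache_mb = 85.1484      # Proc Cache Memory
unused_mb = 1133.9844        # Unused Memory

target_mb = 0.20 * configured_mb                    # 20% rule-of-thumb target
current_pct = 100 * proc_cache_mb / configured_mb   # proc cache share today

print(round(target_mb, 1))                  # 2929.7 (MB the rule suggests)
print(round(current_pct, 2))                # 0.58   (% of configured memory)
print(round(proc_cache_mb + unused_mb, 1))  # 1219.1 (MB if all unused moved)
```

Either way, the current 85 MB proc cache is far below 20%, and even moving all unused memory into it would reach only about 1.2 GB, so the two readings of the rule lead to very different targets.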

How to create a composite index?


Most of the time I have no problem creating composite indexes.

 

Trying to create a composite index on one table, I got the following error message:

 

Number (1903) Severity (16) State (1) Server (Myserver) 600 is the maximum allowable size of an index.  Composite index specified is 602 bytes.

 

It is because one column has length 600 (univarchar(600)).

 

I have query with condition like:

 

where col1 =1 and col2 like 'abc%'

 

col2 is univarchar(600). I tried to create an index on (col1, col2). Without a proper index the query has bad performance.

 

How can I resolve this problem?

db error caused by dbcc memusage


I issued dbcc memusage and the system froze; then I got the following errors:

 

00:00000:00000:2014/10/16 09:50:23.91 kernel  secleanup: time to live expired on engine 1

00:00000:00009:2014/10/16 09:50:23.93 kernel  Terminating the listener with protocol tcp, host myserver, port 4101 because the listener execution context is located on engine 1, which is not responding.

00:00000:00009:2014/10/16 09:50:23.93 kernel  ************************************

00:00000:00009:2014/10/16 09:50:23.93 kernel  curdb = 1 tempdb = 2 pstat = 0x200

00:00000:00009:2014/10/16 09:50:23.93 kernel  lasterror = 0 preverror = 0 transtate = 1

00:00000:00009:2014/10/16 09:50:23.93 kernel  curcmd = 0 program =

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cd0847 upsleepgeneric+0x437()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cadcae listener_checkagain+0x1be()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cad938 listener+0x338()

00:00000:00009:2014/10/16 09:50:23.93 kernel  pc: 0x0000000000cce388 kpstartproc+0x48()

00:00000:00009:2014/10/16 09:50:23.93 kernel  end of stack trace, spid 10, kpid 1638425, suid 0

00:00000:00009:2014/10/16 09:50:23.93 kernel  Started a new listener task with protocol tcp, host myserver, port 4101.

 

The system then repaired itself within 1-2 minutes.

dbcc memusage runs fine on another staging server.

What is the possible reason for this? Hardware?

Which one has better performance: like, charindex, patindex


In a query's WHERE clause, to check whether string a contains b, I have a few options with ASE 12.5.4:

 

a like '%b%'

charindex(b,a)>0

patindex('%b%', a)>0

 

 

Which one has the best performance?
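The three predicates are logically equivalent for a simple containment test; here are Python analogues (an illustration of the semantics only, not ASE code — actual server performance has to be measured):

```python
# Python analogues of the three ASE containment predicates. All three
# return the same truth value, so any performance difference comes from
# the server's per-row evaluation cost, not from different semantics.

import re

def like_contains(a: str, b: str) -> bool:
    return b in a                              # a LIKE '%b%'

def charindex_contains(a: str, b: str) -> bool:
    return a.find(b) + 1 > 0                   # charindex(b, a) > 0

def patindex_contains(a: str, b: str) -> bool:
    return bool(re.search(re.escape(b), a))    # patindex('%b%', a) > 0

for fn in (like_contains, charindex_contains, patindex_contains):
    print(fn('abcdef', 'cde'), fn('abcdef', 'xyz'))  # True False
```

Note that with a leading wildcard none of the three forms can seek an index, so in practice all of them typically scan; the difference is the per-row evaluation cost.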

question on dbcc message


I am running dbcc checkdb and got the following messages for two tables:

 

Checking mytab1: Logical pagesize is 2048 bytes  ---long time

Checking mytab1: Logical pagesize is 2048 bytes

The total number of data pages in this table is 390.

The total number of pages which could be garbage collected to free up some space is 336.

The total number of deleted rows in the table is 3.

The total number of pages with more than 50 percent insert free space is 6.

Table has 2580 data rows.

 

Checking mytab2: Logical pagesize is 2048 bytes

The total number of data pages in this table is 82265.

The total number of pages which could be garbage collected to free up some space is 5431.

The total number of deleted rows in the table is 763.

 

They are small tables, but dbcc took a long time on them. Does this mean something is wrong with these two tables?

How can I figure out and resolve the problem, if any?

Events Cluster Edition


Hi

 

 

I have an ASE Cluster Edition 15.7 SP121, and in rare circumstances all the processes in the database end up in sleeping status. There are no errors in the Sybase errorlog. The Event IDs for these processes are 512 (waiting for buffer validation in cmcc_bufsearch) and 509 (waiting for buffer read in cmcc_bufsearch). Additionally, I need documentation on the possible events in Cluster Edition.

 

The only solution so far has been to kill the dataserver, because it is not possible to kill the processes or to restart the server otherwise.

 

Can anybody help me?


Issue in Loading compressed dump to archive database using ADA feature


Hi All,


One of our Business Suite customers is facing a problem loading a compressed database dump into an archive database using the Sybase ADA feature.

 

When we loaded the dump into the archive database, the following errors occurred:


1> load database archivedb

from '/backupdir_IC5/IC5.DB.20141105.141506.000'

2> go

Msg 15700, Level 16, State 4:

Server 'IC5', Line 1:

Failed to read the virtual page '283331' from the device '14'.

In the errorlog (IC5.log) we see the errors below:

00:0002:00000:00060:2014/11/05 14:48:46.80 kernel Initializing virtual device 14, '/backupdir_IC5/IC5.DB.20141105.141506.000' with dsync 'off'.
00:0002:00000:00060:2014/11/05 14:48:46.80 kernel Virtual device 14 started using asynchronous i/o.
00:0002:00000:00060:2014/11/05 14:48:47.06 server Compressed length (66030) is too big. File probably corrupted.
00:0002:00000:00060:2014/11/05 14:48:47.08 kernel Deactivating virtual device 14, '/backupdir_IC5/IC5.DB.20141105.141506.000'.


The customer's ASE version is 15.7 SP122 on SUSE Linux Enterprise 11 SP2, and the user database size is 40 GB.


From my analysis:
I tried reproducing the issue on our side using the same server configuration and a 4 GB database, but could not reproduce it.
I also tried increasing the archivedb size at the customer site, but that did not help.

The customer uses compression for the database dump. Smaller databases such as sybsystemprocs load perfectly with compression, but the SID database fails with compression while working fine without it.

Is there any restriction related to compression and database size (40 GB) when using the ADA feature? The customer suspects the database size, and that there are some restrictions for the BS database.

Can you please help answer this question and resolve the issue?


Specifications:
Customer uses the following dump configuration to take backup:

Dump Configuration
------------------


[Dump Configuration:IC5DB]
stripe directory = /backupdir_IC5
external api name = DEFAULT
number of stripes = DEFAULT
number of retries = DEFAULT
block size = DEFAULT
compression level = 101
retain days = DEFAULT
init = DEFAULT
verify = header
notify = DEFAULT
remote backup server name = DEFAULT

 

Thanks & Regards,
Amit Kumar Singh

Sybase Cluster, Heartbeat timeout and retries


Bonjour,

I am struggling to find this information and so far have found nothing.

 


My clustered ASE 15.7 database serves around a hundred different databases. Last week I experienced a severe crash, and when trying to restart the cluster I ended up with this:

 

WARN - 458: The instance instancename did not start before the command timed out.

This may mean that the instance needed more time to start or perhaps was not configured properly.  Check the instance log for more information.

 

I am trying to find where those parameters are stored and how I can change them to let my database wait longer or retry more times.

 

What would you recommend: more retries, or waiting longer?

 

As usual, thanks for your time and expertise, appreciated.

 

William

Connecting to ASE instance fails due to "kernel Warning: The internal timer is not progressing."


Hi experts,

 

When I tried to connect to a Sybase ASE instance "ASE1570_S1", I got the following error:

 

[sybase@rhel64-ase-tgt ~]$ isql -Usa -Psybase -SASE1570_S1

CT-LIBRARY error:

        ct_connect(): user api layer: internal Client Library error: Read from the server has timed out.

CT-LIBRARY error:

        ct_connect(): network packet layer: internal net library error: Net-lib operation timed out

 

I can see the following warning in the error log /opt/sybase/errorlogs/ASE1570_S1.log:

 

00:0001:00000:00000:2013/12/30 15:10:29.54 kernel  Warning: The internal timer is not progressing. If this message is generated multiple times, report to Sybase Technical Support and restart the server (alarminterval=-1162).

00:0001:00000:00000:2013/12/30 15:20:29.54 kernel  Warning: The internal timer is not progressing. If this message is generated multiple times, report to Sybase Technical Support and restart the server (alarminterval=-7162).

 

The Adaptive Server version is:

 

Adaptive Server Enterprise/15.7/EBF 21341 SMP SP101 /P/x86_64/Enterprise Linux/ase157sp101/3439/64-bit/FBO/Thu Jun  6 16:08:18 2013

 

Any hint would be highly appreciated.

high value in sysmon "object manager spinlock contention"


Hi,

Lately we have had a lot of problems with Sybase...

 

 

First:

Two days ago we had a problem with high contention on 2 partitions in the default data cache (the cache has 32 partitions in total).

To resolve this problem we created 2 named caches (for the hot tables).

 

 

Now we have a high value for 'object manager spinlock contention' in sysmon (peaking near 95%).

 

 

Users work for 1-2 hours and after that we hit the peak.

 

 

We use dbcc_tune for the hot tables, but despite that we still have problems.

 

Maybe somebody has solved this problem, or knows of some monitoring to find the process influencing this value?

 

 

Attached are the sysmon output and the result of sp_sysmon 'cache wizard' (the sysmon is in a rar archive).

Sybase ASE 15.7 not certified for SuSE Enterprise 11 SP3


Sybase ASE 15.7 is still not certified for SLES 11.3. ASE 15.7 is only certified with SLES 11.2

 

Source:

http://certification.sybase.com/ucr/platformDetail.do?certId=2826&platformId=117&productId=2

 

The dilemma is that SLES 11.2 has been out of general support since January 2014, because the FCS date of SLES 11.2 was 01 Jul 2013.

 

SUSE generally releases service packs every 12 to 18 months. Once a new service pack is released, we provide the industry standard six-month window in which we continue to deliver the critical support you need for the previous service pack, while you plan for, test and implement the new service pack release.

 

Source:

SUSE Linux Enterprise: Support Lifecycle for Platform Products

 

What are you doing: running an unsupported OS, or a non-certified DB?

Does SAP/Sybase provide support when we run ASE 15.7 on SLES 11.3, given that we have OS-related issues and are getting no support from Novell (SuSE)?

 

Does anyone have experience with this problem?
