Dynamic Code Blocks

Microsoft Dynamics GP & .NET technical Blog by Tim Wappat

Smart List, Excel cannot open file because file format or extension is not valid – Dynamics GP

Exporting from Dynamics GP Smart List to Excel can cause the following error on GP2013R2 upwards.

Excel cannot open the file “xx.xlsx” because the file format or the file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file

[Screenshot: the Excel error dialog]

The community post SmartList Export - Excel cannot open the file... has some ideas for Smart List Builder and what may cause the issue; however, we saw this issue today in normal Smart Lists.

By prematurely stopping the export at different points (press Stop), it is possible to home in on the record at which the export fails. Limiting the number of columns in the Smart List also narrows down the field involved. In this example it was some extended characters that had found their way from an ecommerce solution, through eConnect, into the customer name field. It is also important to bear in mind that wrong assumptions can be made about the tables and fields behind a Smart List; for example, the customer name may originate from the one entered on an order rather than from the debtor maintenance record, so try to establish the source of the data in the Smart List.

Once the offending character was identified, it was edited out of the name in the GP user interface, which resulted in the export to Excel working correctly again.

The following Stack Overflow question helped by providing a T-SQL function that allows such characters to be found:

Find Non-Ascii Characters in One Varchar Column or Mulitiple Columns using SQL Server

The following function looks for characters that are “non-ASCII” (the loop bounds are adjusted slightly from the answer so that the final character of the string is also checked):

-- Returns 1 if @string contains a character outside the printable ASCII range (32-126).
-- With @checkExtendedCharset = 1, a list of common extended/accented characters is also tolerated.
create function [dbo].[udf_ContainsNonASCIIChars]
(
    @string nvarchar(4000),
    @checkExtendedCharset bit
)
returns bit
as
begin

    declare @pos int = 1;
    declare @char varchar(1);
    declare @return bit = 0;

    -- start at 1 and use <= so the last character of the string is also examined
    while @pos <= len(@string)
    begin
        select @char = substring(@string, @pos, 1)

        if ascii(@char) < 32 or ascii(@char) > 126
        begin
            if @checkExtendedCharset = 1
            begin
                if ascii(@char) not in (9,124,130,138,142,146,150,154,158,160,170,176,180,181,183,184,185,186,192,193,194,195,196,197,199,200,201,202,203,204,205,206,207,209,210,211,212,213,214,216,217,218,219,220,221,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,248,249,250,251,252,253,254,255)
                begin
                    -- disallowed character found: flag it and stop scanning
                    select @return = 1;
                    select @pos = (len(@string) + 1)
                end
                else
                begin
                    select @pos = @pos + 1
                end
            end
            else
            begin
                -- any character outside 32-126 counts when the extended charset is not allowed
                select @return = 1;
                select @pos = (len(@string) + 1)
            end
        end
        else
        begin
            select @pos = @pos + 1
        end
    end

    return @return;

end

 

The above function can be called against GP table fields like this; run it for any suspect fields and tables:

-- Find Non-ASCII in Debtor CUSTNAME 
select CUSTNAME
from RM00101
where
dbo.udf_ContainsNonASCIIChars(CUSTNAME, 1) = 1

-- Find Non-ASCII in Purchase order work, Vendorname
select *
from POP30300
where
dbo.udf_ContainsNonASCIIChars(VENDNAME, 1) = 1

-- Find Non-ASCII in Vendors, Vendorname
select *
from PM00200
where
dbo.udf_ContainsNonASCIIChars(VENDNAME, 1) = 1
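If there are many candidate fields, the checks can be generated rather than written by hand. The sketch below is not from the original post and simply uses RM00101 as an example table; it outputs one check statement per string column, to copy and run.

-- Hedged sketch: generate a udf_ContainsNonASCIIChars check for every string column of a table
select 'select ' + quotename(COLUMN_NAME)
     + ' from ' + quotename(TABLE_NAME)
     + ' where dbo.udf_ContainsNonASCIIChars(' + quotename(COLUMN_NAME) + ', 1) = 1'
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'RM00101'
and DATA_TYPE in ('char','varchar','nchar','nvarchar')
order by ORDINAL_POSITION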

Note that many characters such as “™” and “®” don’t seem to cause issues. The character that caused the problem in this case was a “T”-like character that had found its way in as a substitute for the “&” character. Investigating, we found this to be character code 22. Having the code is useful, as attempting to copy and paste the character itself causes issues in editors.

[Screenshots: the offending character and its character code]
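One way to carry out that investigation without copying and pasting the character is to list the code of every character in the suspect value. The sketch below is illustrative only; the customer number 'WEB0001' is hypothetical, so substitute the record found by the queries above.

-- Hedged sketch: list the character code of each character in a suspect name
declare @s nvarchar(100), @i int = 1;
select @s = CUSTNAME from RM00101 where CUSTNMBR = 'WEB0001'; -- hypothetical customer number
while @i <= len(@s)
begin
    print cast(@i as varchar(4)) + ': ' + substring(@s, @i, 1)
        + ' (code ' + cast(unicode(substring(@s, @i, 1)) as varchar(10)) + ')';
    set @i = @i + 1;
end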

Now that there is a character code to work with, a SQL replace is possible for occurrences of this character in these fields. As always, back up your data and don’t experiment if you don’t know what you are doing; instead contact your support partner.

SELECT REPLACE(VENDNAME, char(22), '&') from pm00200 where vendorid='DELO'
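The SELECT above only previews the result of the replacement. A minimal sketch of the corresponding fix is below; the vendor ID 'DELO' is the example from the statement above, so adjust the filter (and back up first) before running anything like this.

-- Hedged sketch: apply the replacement once the preview looks correct
update PM00200
set VENDNAME = replace(VENDNAME, char(22), '&')
where VENDORID = 'DELO'
and charindex(char(22), VENDNAME) > 0;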

It is also a good idea to check your integrations and integrating applications to find the source of the corruption.

I would like some further investigation time to see what other characters can cause this error; I suspect it is all the low value control codes (below 32). It would also be possible to make a SQL reporting job to notify users of data issues if they occur again in the future, along the lines sketched below.
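This is only a sketch of such a job step, not something from this post: the Database Mail profile 'GP Alerts', the recipient address and the company database name 'TWO' are all placeholder assumptions.

-- Hedged sketch of a nightly data-quality check that a SQL Agent job could run
if exists (select 1 from RM00101 where dbo.udf_ContainsNonASCIIChars(CUSTNAME, 1) = 1)
begin
    exec msdb.dbo.sp_send_dbmail
        @profile_name = 'GP Alerts',             -- assumed Database Mail profile
        @recipients = 'gp.admin@example.com',    -- placeholder recipient
        @subject = 'Non-ASCII characters found in RM00101.CUSTNAME',
        @execute_query_database = 'TWO',         -- placeholder company database
        @query = 'select CUSTNMBR, CUSTNAME from RM00101 where dbo.udf_ContainsNonASCIIChars(CUSTNAME, 1) = 1';
end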

Check for User Messages(1) Dynamics GP Process Monitor

Since GP 2013, when checking the GP Process Monitor (Microsoft Dynamics GP >> Process Monitor), the message “Check for User Messages(1)” may sometimes be found in the process queue.

The origin of this message is a new feature of GP that allows administrators to send messages to GP users. These messages pop up on the users’ screens after a short delay.

This is achieved through the existence of a polling process that runs regularly on each client instance. That polling process checks for any messages waiting to display to the user. It is this process waiting in the queue that shows up as “Check for User Messages(1)”.

It will keep adding itself to the process queue; after running check links or similar very long processes, I’ve seen dozens of these stacked up awaiting processing. Each one quickly pops off the queue and disappears once it reaches the top of the queue.

See the Process Monitor screenshot below for an example of stuck processes:

[Screenshot: Process Monitor with stacked “Check for User Messages(1)” entries]

Send User Message Functionality

The new message functionality is found under:
Microsoft Dynamics GP>> Tools>> Utilities >> System>> Send Users Message

[Screenshot: the Send Users Message window]

The user is selected and the message entered.

[Screenshot: a message entered for the selected user]

After around 30 seconds or so, this results in the following on the user’s screen.

[Screenshot: the message appearing on the user’s screen]

Obviously this process is polling a table in the DYNAMICS database. The table is SY30000 and the message is put in the “Offline_Message” field; see below, where the message sent to me is “test”.

[Screenshot: SY30000 with the message in the Offline_Message field]
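Any pending messages can also be seen by querying the table directly. In the sketch below only the table name and the Offline_Message field come from this post; the database name DYNAMICS assumes a default system database name, and the filter is an assumption.

-- Hedged sketch: show rows that still have an undelivered message queued
select *
from DYNAMICS.dbo.SY30000
where Offline_Message <> ''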

Stuck Process Queue

If the process queue gets stuck (crashes), it will no longer process queued jobs; however, the “Check for User Messages” process continues to be added to the queue every minute or so. As the queue has crashed and is no longer removing items, it just builds up, filling with this process. What is seen as a result is this process swamping the queue.

As the “Check for User Messages” process is the most frequent process to be added to the queue, it is also the most likely process to be added immediately after any crash occurs in the queue. This makes it appear, when looking at the Process Monitor, as if “Check for User Messages” caused the problem; however, it is merely a victim. The process actually responsible was whichever one immediately preceded it in the queue, as that was the item that caused the queue to halt.

Investigating why that previous process crashed can be done by checking the SQL logs, switching on logging in GP (use the Support Debugging Tool), using SQL Profiler, swapping modified reports back to unmodified vanilla ones, and so on. This is really a different subject, and I’d encourage you to involve a GP consultant, as investigations can be arduous.

SQL Server 2012 SP2 Replication Log Reader error v.11.00.5058

After moving Dynamics GP to a newly built server using a backup and restore strategy, the Dynamics GP SQL replication kept causing problems. The following message flooded the server’s error logs. I ended up rebuilding the replication configuration entirely to solve the issue, but there may be some clues in the steps I went through that might help someone else out there. There was a lot more digging and prodding than the summary below shows!

The last step did not log any message! (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)

[Screenshot: the MSSQL_REPL22037 error in the SQL Agent job history]

It was possible to add text logging by going to the “Run Agent” step in the Log Reader SQL job and adding the following to the list of parameters:

-Publisher [xxxxx] -PublisherDB [xxxxx] -Distributor [xxxxx] -DistributorSecurityMode 1  -Continuous -Output f:\SQLBackups\Reports\ReplicationOutputLog.txt
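If more detail is needed, the Log Reader Agent also supports an -OutputVerboseLevel switch (higher values log more detail); the parameter list might then look something like this, with the same placeholder names as above:

-Publisher [xxxxx] -PublisherDB [xxxxx] -Distributor [xxxxx] -DistributorSecurityMode 1  -Continuous -Output f:\SQLBackups\Reports\ReplicationOutputLog.txt -OutputVerboseLevel 2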

[Screenshots: the Run Agent job step with the added -Output parameter]

The ReplicationOutputLog.txt then contained the following:

Validating publisher

            DBMS: Microsoft SQL Server
            Version: 11.00.5058
            catalog name:
            user name: dbo
            API conformance: 0
            SQL conformance: 0
            transaction capable: 1
            read only: F
            identifier quote char: "
            non_nullable_columns: 0
            owner usage: 15
            max table name len: 128
            max column name len: 128
            need long data len:
            max columns in table: 1000
            max columns in index: 16
            max char literal len: 131072
            max statement len: 131072
            max row size: 131072
2015-02-06 08:54:59.278 OLE DB xxxxxx 'xxxxxxx': {call sp_MSensure_single_instance(N'Repl-LogReader-0-xxxx-19', 2)}
2015-02-06 08:54:59.278 Parameter values obtained from agent profile:
            -pollinginterval 5000
            -historyverboselevel 1
            -logintimeout 15
            -querytimeout 1800
            -readbatchsize 500
            -readbatchsize 500000
2015-02-06 08:54:59.278 Status: 0, code: 22037, text: 'The last step did not log any message!'.
2015-02-06 08:54:59.278 The last step did not log any message!

 

We were also getting lots of errors relating to “Only one Log Reader Agent or log-related procedure”…“can connect to a database at a time” (see the screenshot below).

[Screenshot: the “Only one Log Reader Agent or log-related procedure…” error]

Google revealed some posts about SP1/SP2 hotfixes and cumulative updates for SQL 2012 fixing issues in the area of multiple log reader errors like this. Other posts talked about the database not being upgraded correctly and about new schema differences between versions. My conclusion on reading these posts was that the SQL replication database (distribution) may have been brought over from the previous server (we don’t know the full history of whether it was restored and/or replication was rebuilt by scripts). The restored database may not have been correctly upgraded by whatever is in SP1/SP2; both were applied before our data was restored, so any operations included in SP1/SP2 would not have been applied against it.

After a few evenings spent removing everything I could find of relevance in the replication distribution database tables, and clearing out and recreating the Log Reader Agent and Snapshot Agent jobs, problems still persisted. After countless recreations of the publications and subscriptions, it felt like there were remnants of the previous version’s replication clinging on deep in the bowels of replication (amazing how gut feeling develops with experience).

Running out of patience and lacking a silver bullet, the solution was to remove the subscriptions and the publications and disable replication on the server. I then ensured the replication distribution database and its data files were gone, and that no SQL jobs relating to replication were left behind (there were some, so I removed them too). I also checked all the nodes under SSMS that relate to replication to ensure nothing was left at all, including on the subscriber (there was a shadow of the subscription left on the subscriber).
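For reference, the scripted equivalent of that teardown is roughly as follows. This is only a sketch: the publication name 'GP_Publication', the publisher database 'TWO' and the distribution database name are placeholders, and the exact steps depend on your topology.

-- Hedged sketch: tear replication down completely (run on the publisher unless noted)
use TWO; -- placeholder publisher database
exec sp_dropsubscription @publication = N'GP_Publication', @article = N'all', @subscriber = N'all';
exec sp_droppublication @publication = N'GP_Publication';
exec sp_replicationdboption @dbname = N'TWO', @optname = N'publish', @value = N'false';
use master;
exec sp_dropdistributiondb @database = N'distribution';
exec sp_dropdistributor @no_checks = 1;
-- on the subscriber, leftover subscription metadata can be cleaned with something like:
-- exec sp_subscription_cleanup @publisher = N'PUBSERVER', @publisher_db = N'TWO', @publication = N'GP_Publication';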

Then I built replication anew: created a new distribution database (with a new name, for peace of mind), created a new publication, added the articles and created new subscriptions.

After reinitializing all subscriptions and taking snapshots, everything started working as it should.

I feel lucky that I don’t have a totally mission-critical replication setup. It is good that we have two copies of our replicated tables in different schemas, with views fronting them. This allows us to toggle between live and cached data modes while “messing on with replication”.
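As an illustration of that toggle (the schema and table names here are made up, not our actual setup): the replication-fed copy and the cached copy live in separate schemas, and the view the applications read from is simply re-pointed.

-- Hedged sketch: applications read dbo.Customer; repoint the view between the two copies
create view dbo.Customer as
select * from repl.Customer; -- live, replication-fed copy
go
-- while replication is being worked on, switch to the cached copy:
alter view dbo.Customer as
select * from cache.Customer;
go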

Hey, it’s working!

[Screenshot: replication running again]

The only thing left to figure out is whether the “one log reader” error is something to worry about, or perhaps it will simply go away with the next service pack, whenever that is…

Automated restore of Live Company to Test Company in GP2013 CHANGES

Don’t be caught out by this change to the automated restore of the test company database from live. This is something we do every Friday night via a scheduled SQL job; however, since updating to GP2013 our test company has not been working after the restore. Attempts to log in are greeted by:

The selected company is not configured as a company of the current Microsoft Dynamics System Database

If you refer to my post Schedule restore of live company to test company Dynamics GP, you will see that after the database is restored a script is run to update the company ID in the restored company database so that it matches that of the test company as registered in the DYNAMICS database.

A change happened in GP2013 whereby the DYNAMICS database can now be named anything you like, and this has resulted in some changes to the way this script needs to work.

My first move was to refer to the original Microsoft KB article that I got the information from the first time around, and yes, there is a new script that forks depending on the presence of the table SY00100. Check it out below, or better yet go to the KB itself, in case it has been updated since this post.

Set up a test company that has a copy of live company KB 872370

So the new script for GP2013 (and previous versions) is:

if exists (select 1 from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'SY00100') begin
declare @Statement varchar(850)
select @Statement = 'declare @cStatement varchar(255)
declare G_cursor CURSOR for
select case when UPPER(a.COLUMN_NAME) in (''COMPANYID'',''CMPANYID'')
then ''update ''+a.TABLE_NAME+'' set ''+a.COLUMN_NAME+'' = ''+ cast(b.CMPANYID as char(3))
else ''update ''+a.TABLE_NAME+'' set ''+a.COLUMN_NAME+'' = ''''''+ db_name()+'''''''' end
from INFORMATION_SCHEMA.COLUMNS a, '+rtrim(DBNAME)+'.dbo.SY01500 b
where UPPER(a.COLUMN_NAME) in (''COMPANYID'',''CMPANYID'',''INTERID'',''DB_NAME'',''DBNAME'')
and b.INTERID = db_name() and COLUMN_DEFAULT is not null
and rtrim(a.TABLE_NAME)+''-''+rtrim(a.COLUMN_NAME) <> ''SY00100-DBNAME''
order by a.TABLE_NAME
set nocount on
OPEN G_cursor
FETCH NEXT FROM G_cursor INTO @cStatement
WHILE (@@FETCH_STATUS <> -1)
begin
exec (@cStatement)
FETCH NEXT FROM G_cursor INTO @cStatement
end
close G_cursor
DEALLOCATE G_cursor
set nocount off'
from SY00100
exec (@Statement)
end
else begin
declare @cStatement varchar(255)
declare G_cursor CURSOR for
select case when UPPER(a.COLUMN_NAME) in ('COMPANYID','CMPANYID')
then 'update '+a.TABLE_NAME+' set '+a.COLUMN_NAME+' = '+ cast(b.CMPANYID as char(3))
else 'update '+a.TABLE_NAME+' set '+a.COLUMN_NAME+' = '''+ db_name()+'''' end
from INFORMATION_SCHEMA.COLUMNS a, DYNAMICS.dbo.SY01500 b
where UPPER(a.COLUMN_NAME) in ('COMPANYID','CMPANYID','INTERID','DB_NAME','DBNAME')
and b.INTERID = db_name() and COLUMN_DEFAULT is not null
order by a.TABLE_NAME
set nocount on
OPEN G_cursor
FETCH NEXT FROM G_cursor INTO @cStatement
WHILE (@@FETCH_STATUS <> -1)
begin
exec (@cStatement)
FETCH NEXT FROM G_cursor INTO @cStatement
end
close G_cursor
DEALLOCATE G_cursor
set nocount off
end

The old script for reference was:

/******************************************************************************/

/* Description: */

/* Updates any table that contains a company ID or database name value */

/* with the appropriate values as they are stored in the DYNAMICS.dbo.SY01500 table */

/* */

/******************************************************************************/

if not exists(select 1 from tempdb.dbo.sysobjects where name = '##updatedTables')
create table [##updatedTables] ([tableName] char(100))
truncate table ##updatedTables
declare @cStatement varchar(255)
declare G_cursor CURSOR for
select
case
when UPPER(a.COLUMN_NAME) in ('COMPANYID','CMPANYID')
then 'update '+ a.TABLE_SCHEMA + '.' + a.TABLE_NAME+' set '+a.COLUMN_NAME+' = '+ cast(b.CMPANYID as char(3))
else
'update '+ a.TABLE_SCHEMA + '.' + a.TABLE_NAME+' set '+a.COLUMN_NAME+' = '''+ db_name()+''''
end
from INFORMATION_SCHEMA.COLUMNS a, DYNAMICS.dbo.SY01500 b, INFORMATION_SCHEMA.TABLES c
where UPPER(a.COLUMN_NAME) in ('COMPANYID','CMPANYID','INTERID','DB_NAME','DBNAME', 'COMPANYCODE_I')
and b.INTERID = db_name() and a.TABLE_NAME = c.TABLE_NAME and c.TABLE_CATALOG = db_name() and c.TABLE_TYPE = 'BASE TABLE'
and a.TABLE_NAME <> 'UPR70501' and a.TABLE_NAME <> 'MPOSMAIN'
set nocount on
OPEN G_cursor
FETCH NEXT FROM G_cursor INTO @cStatement
WHILE (@@FETCH_STATUS <> -1)
begin
insert ##updatedTables select
substring(@cStatement,8,patindex('%set%',@cStatement)-9)
Exec (@cStatement)

FETCH NEXT FROM G_cursor INTO @cStatement
end
DEALLOCATE G_cursor
select [tableName] as 'Tables that were Updated' from ##updatedTables


NOTE: If you have already run the old script, then the new script will not execute; the error indicates that it can’t find the company lookup table SY01500 in the company database. That is because it should be looking for it in the DYNAMICS database.

A look at the script reveals the issue: the old script wrongly updates table SY00100 with the test database name (it is just blindly looking for instances of DBNAME). SY00100 should have DYNAMICS in the DBNAME field for the new script to work, but due to this issue it has the test company DBNAME in there. Update it to whatever your DYNAMICS database name is, using DYNAMICS as in the example below (your CMPANYID may be different).

[Screenshot: the SY00100 row with DBNAME updated to DYNAMICS]
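A minimal sketch of that fix, run against the restored test company database and assuming your system database really is named DYNAMICS (check the existing row first):

-- inspect the row that the old script overwrote
select CMPANYID, DBNAME from SY00100;
-- point the company back at the system database
update SY00100 set DBNAME = 'DYNAMICS';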

Now run the new script again; it will run successfully and you can log in to the test company.

I have updated my previous post to point to this post and changed the script there to protect the innocent.