Dynamic Code Blocks

Microsoft Dynamics GP & .NET technical Blog by Tim Wappat

SQL bucket filling example

I’m frightened that one day this sample will disappear. It’s from a SQLServerCentral.com thread, “Filling Buckets”, originally posted by “ChrisM@work”.

-- Originally from:
-- http://www.sqlservercentral.com/Forums/Topic568498-338-1.aspx
DECLARE @AmountToAllocate INT = 21

;WITH Calculator AS (
    SELECT
        BucketID, TotalSize, Amount,
        AmountLeftToAllocate = CASE
            WHEN @AmountToAllocate > (TotalSize - Amount) THEN @AmountToAllocate - (TotalSize - Amount)
            WHEN @AmountToAllocate < 0 AND ABS(@AmountToAllocate) > Amount THEN Amount + @AmountToAllocate
            ELSE 0 END,
        NewAmount = CASE
            WHEN @AmountToAllocate > (TotalSize - Amount) THEN TotalSize
            WHEN @AmountToAllocate < 0 AND ABS(@AmountToAllocate) > Amount THEN 0
            ELSE Amount + @AmountToAllocate END
    FROM dbo.Buckets
    WHERE BucketID = 1
    UNION ALL
    SELECT
        tr.BucketID, tr.TotalSize, tr.Amount,
        AmountLeftToAllocate = CASE
            WHEN lr.AmountLeftToAllocate > (tr.TotalSize - tr.Amount) THEN lr.AmountLeftToAllocate - (tr.TotalSize - tr.Amount)
            WHEN lr.AmountLeftToAllocate < 0 AND ABS(lr.AmountLeftToAllocate) > tr.Amount THEN tr.Amount + lr.AmountLeftToAllocate
            ELSE 0 END,
        NewAmount = CASE
            WHEN lr.AmountLeftToAllocate > (tr.TotalSize - tr.Amount) THEN tr.TotalSize
            WHEN lr.AmountLeftToAllocate < 0 AND ABS(lr.AmountLeftToAllocate) > tr.Amount THEN 0
            ELSE tr.Amount + lr.AmountLeftToAllocate END
    FROM dbo.Buckets tr
    INNER JOIN Calculator lr ON lr.BucketID + 1 = tr.BucketID
)
SELECT
    BucketID,
    TotalSize,
    Amount = NewAmount,
    OldAmount = Amount
FROM Calculator

It has been my preferred boilerplate/starting place for SQL challenges like stock allocation and bin allocation in Dynamics GP. So, to protect it from getting lost on the internet and to help me find it quickly when I need it again, I’ve posted it here.
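The carry-forward logic the recursive CTE implements is easier to see outside SQL. Here is a minimal Python sketch of the same allocation rules (the function name and tuple layout are my own, purely for illustration, not from the original thread):

```python
def fill_buckets(buckets, amount):
    """Allocate `amount` across ordered buckets, respecting each bucket's
    capacity; a negative amount drains buckets instead.

    `buckets` is a list of (total_size, current_amount) tuples.
    Returns (new_amounts, amount_left_unallocated)."""
    new_amounts = []
    left = amount
    for total_size, current in buckets:
        if left > total_size - current:
            # More to allocate than this bucket can hold: fill it and carry on.
            new_amounts.append(total_size)
            left -= total_size - current
        elif left < 0 and abs(left) > current:
            # Draining more than this bucket contains: empty it and carry on.
            new_amounts.append(0)
            left += current
        else:
            # The remainder fits (or drains) entirely within this bucket.
            new_amounts.append(current + left)
            left = 0
    return new_amounts, left
```

Each bucket plays the role of one row of `dbo.Buckets`, and `left` is the `AmountLeftToAllocate` column carried between recursion steps.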

Alternative way to “open command prompt here” – Windows hint

I was shown this at an MSDN event and have used it regularly since. You can open a cmd prompt at the current location from Windows Explorer simply by typing into the folder address bar and hitting return.

Navigate to location

Windows explorer showing folder contents

Type “cmd” into the address bar as shown below

explorer folder address bar has cmd typed into it

The cmd window will open up at the same folder location

cmd window opened at location of folder

This is equivalent to the more familiar [Shift]+right-clicking in the Explorer window and choosing “Open command window here”.


Is eConnect Integration Service Multi-threaded?

Personally, I have not experienced performance issues with eConnect, so the question of whether the integration service is multi-threaded or not had not crossed my mind before. Here are my thoughts so far, after the question popped up.

What is the integration service?

eConnect Service shown from services mmc

The eConnect integration service is a Windows service that wraps eConnect, creating WCF endpoints that forward calls to the underlying eConnect API methods. It literally just exposes methods like CreateEntity via WCF and forwards any calls through to the eConnect CreateEntity method; it’s a few lines of code.

The WCF service behaviour is decorated with InstanceContextMode.Single and the default ConcurrencyMode.Single. Thus the WCF service is a singleton, serialising every call into the service:

   *   Processes messages on one thread.
   *   Creates one service object per session.

The eConnect library is not in itself multi-threaded; for example, it does not intelligently break an eConnect document up into multiple parallel threads for execution. So the only opportunity for multi-threading has to be in the application that calls eConnect. Thus, from what we know about the WCF endpoints, the conclusion has to be that the eConnect Integration Service is not multi-threaded.

There is evidence of integrations in use that successfully push very large volumes through eConnect with no performance issues, so perhaps for most people its performance is not a problem. For those of you experiencing performance issues, let’s think a little more about this.

Improving throughput

It is feasible to create your own version of the Integration Service; in fact, I wrote one when WCF first came out that was very similar to what was to follow later from Microsoft. It would be possible to add thread awareness to such a custom application, submitting eConnect documents in parallel against the eConnect API.

Using multiple threads to push more through eConnect leads me to another concern: if a large volume of transactions of the same type are pushed through eConnect, and they rely on updating a central resource, say a batch table, the next note ID, or the next SOP number, then a bottleneck exists.

At a SQL level the documents are applied as SQL transactions, so if one element of them fails the transaction can roll back safely. Because of this, at a SQL level another transaction cannot proceed until the previous one is committed (if shared table-locking dependencies exist). Hence the transactions end up serialised anyway, and little performance improvement is achieved by multi-threading the transaction pump that is feeding transactions into SQL Server (eConnect).
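This serialisation effect can be sketched with a small, purely hypothetical example (the names here are mine and this is not eConnect code): however many worker threads submit documents, the step that claims the next document number has to run under a shared lock, so the threads queue up on it.

```python
import threading

# Hypothetical model of a contended central resource, akin to a
# "next number" table that every document submission must update.
next_number_lock = threading.Lock()
next_number = 0
submitted = []

def submit_document():
    """Simulate one document submission: claim the next number, then do
    the rest of the work (which could genuinely run in parallel)."""
    global next_number
    with next_number_lock:       # every thread serialises on this step
        next_number += 1
        doc_number = next_number
    submitted.append(doc_number)

threads = [threading.Thread(target=submit_document) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All eight documents get distinct, gap-free numbers, but each thread
# had to wait its turn for the lock, just as SQL transactions queue on
# a shared lock until the previous one commits.
```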

The place to look for performance improvement is investment in the SQL Server hardware and configuration. With some investment of time and/or money, it is possible to get those transactions to happen as quickly as possible. That is a win for the users and a win for the integrating app. Optimising the hardware and managing the SQL instance properly will bring larger improvements in throughput than simply attempting to push more volume into the bottleneck. It also lowers the probability of deadlocks, something that increases with the higher throughput of SQL transactions generated by integrations. A healthy, well-provisioned SQL Server leads to a happier GP application.

Testing theory

I built a little console application to test this thinking. It spins up a variable number of worker threads and tries to create the same overall number of inventory items using those threads. The underlying SQL Server load seemed to have more bearing on the processing times than the number of threads, which did not seem to affect how long it took to process the workload. I admit I have not thought this through deeply, and there may be a flaw in my empirical testing, but taking it at face value it supports the theory in this post. I know the eConnect transactions involved are tiny, so it may not be telling me much.
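The shape of such a harness can be sketched as follows (this is an illustrative Python stand-in, not my actual .NET console application; the delay and counts are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def create_item(item_no):
    # Stand-in for submitting one eConnect item document; in reality the
    # elapsed time would be dominated by SQL Server, not by this thread.
    time.sleep(0.001)
    return item_no

def run_workload(worker_threads, item_count=50):
    """Create item_count items using worker_threads threads and return
    (elapsed_seconds, results) so runs with different thread counts can
    be compared."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=worker_threads) as pool:
        results = list(pool.map(create_item, range(item_count)))
    return time.perf_counter() - start, results
```

Comparing runs with one worker against runs with several, against a real eConnect endpoint, is the experiment whose timings are tabulated below.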

| Worker Threads | Processing Time |
|---|---|
| 1 | 6.291 |
| 1 | 10.526 |
| 6 | 5.317 |
| 6 | 14.913 |
| 6 | 5.103 |
| 6 | 5.808 |
| 1 | 5.165 |

I do wonder if breaking large eConnect operations into smaller chunks for submission would result in faster overall processing, something for investigation in the future…

Please comment on this post, if you have any alternative views or insight on this, I would be interested to hear them.

Steve Endow talked about the same subject back in 2014 in his post “Is eConnect multi-threaded?”. In that post he discusses the Windows message queue based eConnect Incoming Service having thread settings; this is not to be confused with the relatively newer WCF-based “Integration Service” we are talking about here.